id (int64, 12–1.07M) | title (string, 1–124 chars) | text (string, 0–228k chars) | paragraphs (list) | abstract (string, 0–123k chars) | date_created (string, 0–20 chars) | date_modified (string, 20 chars) | templates (list) | url (string, 31–154 chars)
---|---|---|---|---|---|---|---|---|
15,070 | I386 | The Intel 386, originally released as 80386 and later renamed i386, is a 32-bit microprocessor introduced in 1985. The first versions had 275,000 transistors and were the central processing unit (CPU) of many workstations and high-end personal computers of the time.
The 20 MHz version operates at 4–5 MIPS and performs between 8,000 and 9,000 Dhrystones per second. The 25 MHz version was rated at about 7 MIPS, and a 33 MHz 80386 was reportedly measured at about 11.4 MIPS; at that speed it delivers the performance of roughly 8 VAX MIPS. These processors averaged about 4.4 clocks per instruction.
Development of i386 technology began in 1982 under the internal name of P3. The tape-out of the 80386 development was finalized in July 1985. The 80386 was introduced as pre-production samples for software development workstations in October 1985. Manufacturing of the chips in significant quantities commenced in June 1986, along with the first plug-in device that allowed existing 80286-based computers to be upgraded to the 386, the Translator 386 by American Computer and Peripheral. Mainboards for 80386-based computer systems were cumbersome and expensive at first, but manufacturing them was justified once the 80386 reached mainstream adoption. The first personal computer to make use of the 80386 was the Deskpro 386, designed and manufactured by Compaq; this marked the first time a fundamental component in the IBM PC compatible de facto standard was updated by a company other than IBM.
In May 2006, Intel announced that i386 production would stop at the end of September 2007. Although it had long been obsolete as a personal computer CPU, Intel and others had continued making the chip for embedded systems. Such systems using an i386 or one of many derivatives are common in aerospace technology and electronic musical instruments, among others. Some mobile phones also used (later fully static CMOS variants of) the i386 processor, such as the BlackBerry 950 and Nokia 9000 Communicator. Linux continued to support i386 processors until December 11, 2012, when the kernel cut 386-specific instructions in version 3.8.
The 32-bit i386 can correctly execute most code intended for the earlier 16-bit processors such as 8086 and 80286 that were ubiquitous in early PCs. As the original implementation of the 32-bit extension of the 80286 architecture, the i386 instruction set, programming model, and binary encodings are still the common denominator for all 32-bit x86 processors, which is termed the i386 architecture, x86, or IA-32, depending on context. Over the years, successively newer implementations of the same architecture have become several hundreds of times faster than the original 80386 (and thousands of times faster than the 8086).
The processor was a significant evolution in the x86 architecture, and extended a long line of processors that stretched back to the Intel 8008. The predecessor of the 80386 was the Intel 80286, a 16-bit processor with a segment-based memory management and protection system. The 80386 added a three-stage instruction pipeline, bringing the total to a six-stage instruction pipeline, extended the architecture from 16 bits to 32 bits, and added an on-chip memory management unit. This paging translation unit made it much easier to implement operating systems that used virtual memory. It also added support for hardware debug registers.
The 80386 featured three operating modes: real mode, protected mode, and virtual 8086 mode. Protected mode, which debuted in the 286, was extended to allow the 386 to address up to 4 GB of memory. With the segmented addressing system, virtual memory can extend up to 64 terabytes (16,384 segment selectors, each addressing a segment of up to 4 GB). The all-new virtual 8086 mode (or VM86) made it possible to run one or more real mode programs in a protected environment, although some programs were not compatible. The 386 also features scaled indexing and a 64-bit barrel shifter.
The ability to set up a 386 so that it behaves as if it had a flat memory model in protected mode, even though it uses a segmented memory model in all modes, was arguably the most important feature change for the x86 processor family until AMD released x86-64 in 2003.
Several new instructions were added in the 386: BSF, BSR, BT, BTS, BTR, BTC, CDQ, CWDE, LFS, LGS, LSS, MOVSX, MOVZX, SETcc, SHLD, SHRD.
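As a rough illustration of what a few of these instructions compute, the following C sketch mirrors the documented behaviour of BSF (bit scan forward), MOVZX (move with zero-extend) and SETcc (set byte on condition). The helper names are invented for this sketch and are not Intel definitions.

```c
#include <stdint.h>
#include <stdio.h>

/* BSF: index of the lowest set bit (the real instruction leaves the result
   undefined and sets ZF when the input is zero). */
static int bsf(uint32_t x) {
    if (x == 0) return -1;
    int i = 0;
    while (!(x & 1u)) { x >>= 1; i++; }
    return i;
}

/* MOVZX: widen a 16-bit value to 32 bits, filling the upper bits with zero. */
static uint32_t movzx16(uint16_t x) {
    return (uint32_t)x;
}

/* SETcc (here the "greater" condition): store 1 if the condition holds, else 0. */
static uint8_t setg(int32_t a, int32_t b) {
    return (uint8_t)(a > b);
}

int main(void) {
    printf("bsf(0x50) = %d\n", bsf(0x50));                        /* 4 */
    printf("movzx16(0xFFFF) = %u\n", (unsigned)movzx16(0xFFFF));  /* 65535 */
    printf("setg(3, 2) = %u\n", (unsigned)setg(3, 2));            /* 1 */
    return 0;
}
```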
Two new segment registers (FS and GS) were added for general-purpose programs, and the single Machine Status Word of the 286 grew into eight control registers, CR0–CR7. Debug registers DR0–DR7 were added for hardware breakpoints. New forms of the MOV instruction are used to access them.
The chief architect in the development of the 80386 was John H. Crawford. He was responsible for extending the 80286 architecture and instruction set to 32 bits, and then led the microprogram development for the 80386 chip.
The i486 and P5 Pentium line of processors were descendants of the i386 design.
The following data types are directly supported and thus implemented by one or more i386 machine instructions; these data types are briefly described here.
The following i386 assembly source code is for a subroutine named _strtolower that copies a null-terminated ASCIIZ character string from one location to another, converting all alphabetic characters to lower case. The string is copied one byte (8-bit character) at a time.
The example code uses the EBP (base pointer) register to establish a call frame, an area on the stack that contains all of the parameters and local variables for the execution of the subroutine. This kind of calling convention supports reentrant and recursive code and has been used by Algol-like languages since the late 1950s. A flat memory model is assumed, specifically, that the DS and ES segments address the same region of memory.
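The assembly listing itself is not reproduced here; purely as an illustration of what the routine computes (the helper name below is hypothetical, and this is C rather than the article's i386 assembly), the copy-and-lowercase loop might look like this:

```c
#include <stdio.h>

/* Hypothetical C counterpart of the _strtolower routine described above:
   copy a null-terminated string one byte at a time, converting alphabetic
   characters to lower case. */
static void strtolower_copy(char *dst, const char *src) {
    char c;
    do {
        c = *src++;
        if (c >= 'A' && c <= 'Z')
            c += 'a' - 'A';   /* convert upper-case letters */
        *dst++ = c;           /* the terminating NUL is copied too */
    } while (c != '\0');
}

int main(void) {
    char out[32];
    strtolower_copy(out, "Hello, WORLD!");
    printf("%s\n", out);      /* prints "hello, world!" */
    return 0;
}
```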
The first PC based on the Intel 80386 was the Compaq Deskpro 386. By extending the 16/24-bit IBM PC/AT standard into a natively 32-bit computing environment, Compaq became the first company to design and manufacture such a major technical hardware advance on the PC platform. IBM was offered use of the 80386, but because it held manufacturing rights for the earlier 80286, it chose to rely on that processor for a couple more years. The early success of the Compaq Deskpro 386 played an important role in legitimizing the PC "clone" industry and in de-emphasizing IBM's role within it. The first computer system sold with the 386SX was the Compaq Deskpro 386S, released in July 1988.
Prior to the 386, the difficulty of manufacturing microchips and the uncertainty of reliable supply made it desirable that any mass-market semiconductor be multi-sourced, that is, made by two or more manufacturers, the second and subsequent companies manufacturing under license from the originating company. The 386 was for a time (4.7 years) only available from Intel, since Andy Grove, Intel's CEO at the time, made the decision not to encourage other manufacturers to produce the processor as second sources. This decision was ultimately crucial to Intel's success in the market. The 386 was the first significant microprocessor to be single-sourced. Single-sourcing the 386 allowed Intel greater control over its development and substantially greater profits in later years.
AMD introduced its compatible Am386 processor in March 1991 after overcoming legal obstacles, thus ending Intel's 4.7-year monopoly on 386-compatible processors. From 1991 IBM also manufactured 386 chips under license for use only in IBM PCs and boards.
Intel originally intended for the 80386 to debut at 16 MHz. However, due to poor yields, it was instead introduced at 12.5 MHz.
Early in production, Intel discovered a marginal circuit that could cause a system to return incorrect results from 32-bit multiply operations. Not all of the processors already manufactured were affected, so Intel tested its inventory. Processors that were found to be bug-free were marked with a double sigma (ΣΣ), and affected processors were marked "16 BIT S/W ONLY". These latter processors were sold as good parts, since at the time 32-bit capability was not relevant for most users.
The i387 math coprocessor was not ready in time for the introduction of the 80386, and so many of the early 80386 motherboards instead provided a socket and hardware logic to make use of an 80287. In this configuration the FPU operated asynchronously to the CPU, usually with a clock rate of 10 MHz. The original Compaq Deskpro 386 is an example of such a design. However, this was an annoyance to those who depended on floating-point performance, as the performance advantages of the 80387 over the 80287 were significant.
Intel later offered a modified version of its 486DX in i386 packaging, branded as the Intel RapidCAD. This provided an upgrade path for users with i386-compatible hardware. The upgrade was a pair of chips that replaced both the i386 and i387. Since the 486DX design contained an FPU, the chip that replaced the i386 contained the floating-point functionality, and the chip that replaced the i387 served very little purpose. However, the latter chip was necessary in order to provide the FERR signal to the mainboard and appear to function as a normal floating-point unit.
Third parties offered a wide range of upgrades, for both SX and DX systems. The most popular ones were based on the Cyrix 486DLC/SLC core, which typically offered a substantial speed improvement due to its more efficient instruction pipeline and internal L1 SRAM cache. The cache was usually 1 KB, or sometimes 8 KB in the TI variant. Some of these upgrade chips (such as the 486DRx2/SRx2) were marketed by Cyrix themselves, but they were more commonly found in kits offered by upgrade specialists such as Kingston, Evergreen Technologies and Improve-It Technologies. Some of the fastest CPU upgrade modules featured the IBM SLC/DLC family (notable for its 16 KB L1 cache), or even the Intel 486 itself. Many 386 upgrade kits were advertised as being simple drop-in replacements, but often required complicated software to control the cache or clock doubling. Part of the problem was that on most 386 motherboards, the A20 line was controlled entirely by the motherboard with the CPU being unaware, which caused problems on CPUs with internal caches.
Overall, it was very difficult to configure upgrades to produce the results advertised on the packaging, and upgrades were often not very stable or not fully compatible.
Original version, released in October 1985. The 16 MHz version was available for US$299 in quantities of 100. The 20 MHz version was available for US$599 in quantities of 100. The 33 MHz version became available on April 10, 1989.
The military version was made using the CHMOS III process technology. It was made to withstand 10⁵ rad(Si) or greater. It was available for US$945 each in quantities of 100.
In 1988, Intel introduced the 80386SX, most often referred to as the 386SX, a cut-down version of the 80386 with a 16-bit data bus, mainly intended for lower-cost PCs aimed at the home, educational, and small-business markets, while the 386DX remained the high-end variant used in workstations, servers, and other demanding tasks. The CPU remained fully 32-bit internally, but the 16-bit bus was intended to simplify circuit-board layout and reduce total cost. The 16-bit bus simplified designs but hampered performance. Only 24 pins were connected to the address bus, limiting addressing to 16 MB, but this was not a critical constraint at the time. Performance differences were due not only to differing data-bus widths, but also to the performance-enhancing cache memories often employed on boards using the original chip.
The original 80386 was subsequently renamed i386DX to avoid confusion. However, Intel subsequently used the "DX" suffix to refer to the floating-point capability of the i486DX. The 387SX was an 80387 part that was compatible with the 386SX (i.e. with a 16-bit databus). The 386SX was packaged in a surface-mount QFP and sometimes offered in a socket to allow for an upgrade.
The 16 MHz 386SX comes in a 100-lead BQFP. It was available for US$165 in quantities of 1,000 and delivers about 2.5 to 3 MIPS. A low-power version became available on April 10, 1989; it uses 20 to 30 percent less power than the regular version and supports operating temperatures up to 100 °C.
The 80386SL was introduced as a power-efficient version for laptop computers. The processor offered several power-management options (e.g. SMM), as well as different "sleep" modes to conserve battery power. It also contained support for an external cache of 16 to 64 KB. The extra functions and circuit implementation techniques caused this variant to have over 3 times as many transistors as the i386DX. The i386SL was first available at 20 MHz clock speed, with the 25 MHz model later added.
Dave Vannier was the chief architect of this microprocessor. The design took two years to complete, since it reused the existing 386 architecture and was assisted by advanced computer-aided design tools, including a complete simulation of the system board. The die contains the 386 CPU core, AT bus controller, memory controller, internal bus controller, cache control logic with cache tag SRAM, and clock circuitry. The CPU contains 855,000 transistors and is built on one-micron CHMOS IV technology. It was available for US$176 in quantities of 1,000.
A specially packaged Intel 486DX and a dummy floating-point unit (FPU) designed as pin-compatible replacements for an i386 processor and i387 FPU.
This was an embedded version of the 80386SX which did not support real mode and paging in the MMU.
System and power management and built-in peripheral and support functions: Two 82C59A interrupt controllers; Timer, Counter (3 channels); Asynchronous SIO (2 channels); Synchronous SIO (1 channel); Watchdog timer (Hardware/Software); PIO. Usable with 80387SX or i387SL FPUs.
Transparent power management mode, integrated MMU and TTL compatible inputs (only 386SXSA). Usable with i387SX or i387SL FPUs.
Transparent power management mode and integrated MMU. Usable with i387SX or i387SL FPUs.
Windows 95 was the only entry in the Windows 9x series to officially support the 386, requiring at least a 386DX, though a 486 or better was recommended; Windows 98 requires a 486DX or higher. In the Windows NT family, Windows NT 3.51 was the last version with 386 support.
Debian GNU/Linux dropped 386 support with the release of 3.1 (Sarge) in 2005 and completely removed support in 2007 with 4.0 (Etch). Citing the maintenance burden around SMP primitives, the Linux kernel developers cut support from the development codebase in December 2012, later released as kernel version 3.8.
Among the BSDs, FreeBSD's 5.x releases were the last to support the 386; support for the 386SX was cut with release 5.2, while the remaining 386 support was removed with the 6.0 release in 2005. OpenBSD removed 386 support with version 4.2 (2007), DragonFly BSD with release 1.12 (2008), and NetBSD with the 5.0 release (2009).
| 2001-09-13T12:50:37Z | 2023-12-26T13:11:42Z | [
"Template:Intel processors",
"Template:Short description",
"Template:Use mdy dates",
"Template:Main",
"Template:Cite journal",
"Template:Clear",
"Template:Anchor",
"Template:Reflist",
"Template:About",
"Template:Lowercase title",
"Template:Infobox CPU",
"Template:Citation needed",
"Template:Cite web",
"Template:Cite book",
"Template:Authority control",
"Template:Redirect",
"Template:Efn",
"Template:Notelist",
"Template:Cite magazine"
]
| https://en.wikipedia.org/wiki/I386 |
15,072 | Instruction register | In computing, the instruction register (IR) or current instruction register (CIR) is the part of a CPU's control unit that holds the instruction currently being executed or decoded. In simple processors, each instruction to be executed is loaded into the instruction register, which holds it while it is decoded, prepared and ultimately executed, which can take several steps.
Some more complex processors use a pipeline of instruction registers, where each stage of the pipeline does part of the decoding, preparation or execution and then passes it to the next stage for its step. Modern processors can even perform some of the steps out of order, as decoding of several instructions is done in parallel.
Decoding the op-code in the instruction register includes determining the instruction, determining where its operands are in memory, retrieving the operands from memory, allocating processor resources to execute the command (in superscalar processors), etc.
The output of the IR is available to control circuits, which generate the timing signals that control the various processing elements involved in executing the instruction.
In the instruction cycle, the instruction is loaded into the instruction register after the processor fetches it from the memory location pointed to by the program counter.
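As a purely illustrative sketch (a toy machine with made-up opcodes, not any real instruction set), the following C loop models the instruction register's role in the fetch-decode-execute cycle:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Hypothetical opcodes for a toy machine: 0x01 = INC, 0x02 = DEC, 0x00 = HALT. */
    uint8_t memory[] = { 0x01, 0x02, 0x00 };
    uint32_t pc = 0, acc = 0;

    for (;;) {
        uint8_t ir = memory[pc++];   /* fetch: load the IR from memory[PC], advance PC */
        switch (ir) {                /* decode and execute the instruction held in the IR */
            case 0x01: acc++; break;
            case 0x02: acc--; break;
            default:   printf("acc = %u\n", acc); return 0;   /* halt */
        }
    }
}
```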
| 2022-02-09T18:41:15Z | [
"Template:Short description",
"Template:Use American English",
"Template:Reflist",
"Template:Cite book",
"Template:Computer hardware stub",
"Template:Processor technologies"
]
| https://en.wikipedia.org/wiki/Instruction_register |
|
15,073 | Lists of islands | This is a list of the lists of islands in the world grouped by country, by continent, by body of water, and by other classifications. For rank-order lists, see the other lists of islands below.
By ocean:
By other bodies of water:
| 2001-10-28T07:23:45Z | 2023-12-02T21:00:13Z | [
"Template:Short description",
"Template:Columns-list",
"Template:Portal",
"Template:List of lists",
"Template:Europe topic",
"Template:Anchor",
"Template:Africa topic",
"Template:Asia topic",
"Template:North America topic",
"Template:Oceania topic",
"Template:South America topic"
]
| https://en.wikipedia.org/wiki/Lists_of_islands |
15,075 | INTERCAL | The Compiler Language With No Pronounceable Acronym (INTERCAL) is an esoteric programming language that was created as a parody by Don Woods and James M. Lyon, two Princeton University students, in 1972. It satirizes aspects of the various programming languages at the time, as well as the proliferation of proposed language constructs and notations in the 1960s.
There are two maintained implementations of INTERCAL dialects: C-INTERCAL (created in 1990), maintained by Eric S. Raymond and Alex Smith, and CLC-INTERCAL, maintained by Claudio Calvelli.
According to the original manual by the authors,
The full name of the compiler is "Compiler Language With No Pronounceable Acronym", which is, for obvious reasons, abbreviated "INTERCAL".
The original Princeton implementation used punched cards and the EBCDIC character set. To allow INTERCAL to run on computers using ASCII, substitutions for two characters had to be made: $ substituted for ¢ as the mingle operator, "represent[ing] the increasing cost of software in relation to hardware", and ? was substituted for ⊻ as the unary exclusive-or operator to "correctly express the average person's reaction on first encountering exclusive-or". In recent versions of C-INTERCAL, the older operators are supported as alternatives; INTERCAL programs may now be encoded in ASCII, Latin-1, or UTF-8.
C-INTERCAL swaps the major and minor version numbers, compared to tradition. The HISTORY file shows releases starting at version 0.3 and as of May 2020 having progressed to 0.31, but containing 1.26 between 0.26 and 0.27.
CLC-INTERCAL version numbering scheme was traditional until version 0.06, when it changed to the scheme documented in the README file, which says:
* The term "version" has been replaced by "perversion" for correctness
* The perversion number consists of a floating-point number with independent signs for the integer and fractional part. Negative fractions indicate pre-escapes (so 1.-94 means "94 pre-escapes to go before 1.00". Or you can just add the numbers together and get 0.06, which is entirely a coincidence since 0.06 is not being developed)
* The fractional part of a perversion number can be integer or floating point, with a similar meaning for the parts. The current pre-escape is 1.-94.-2 which means "2 pre-pre-escapes to go before pre-escape 1.-94".
INTERCAL was intended to be completely different from all other computer languages. Common operations in other languages have cryptic and redundant syntax in INTERCAL. From the INTERCAL Reference Manual:
It is a well-known and oft-demonstrated fact that a person whose work is incomprehensible is held in high esteem. For example, if one were to state that the simplest way to store a value of 65536 in a 32-bit INTERCAL variable is:
any sensible programmer would say that that was absurd. Since this is indeed the simplest method, the programmer would be made to look foolish in front of his boss, who would of course happen to turn up, as bosses are expected to do. The effect would be no less devastating for the programmer having been correct.
INTERCAL has many other features designed to make it even more aesthetically unpleasing to the programmer: it uses statements such as "READ OUT", "IGNORE", "FORGET", and modifiers such as "PLEASE". This last keyword provides two reasons for the program's rejection by the compiler: if "PLEASE" does not appear often enough, the program is considered insufficiently polite, and the error message says this; if it appears too often, the program could be rejected as excessively polite. Although this feature existed in the original INTERCAL compiler, it was undocumented.
Despite the language's intentionally obtuse and wordy syntax, INTERCAL is nevertheless Turing-complete: given enough memory, INTERCAL can solve any problem that a Universal Turing machine can solve. Most implementations of INTERCAL do this very slowly, however. A Sieve of Eratosthenes benchmark, computing all prime numbers less than 65536, was tested on a Sun SPARCstation 1 in 1992. In C, it took less than half a second; the same program in INTERCAL took over seventeen hours.
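For reference, a minimal C sieve of the kind used in that benchmark looks like the sketch below; this is not the 1992 benchmark's actual source, just the same task.

```c
#include <stdio.h>
#include <string.h>

/* Count the primes below 65536 with a simple sieve of Eratosthenes. */
int main(void) {
    enum { LIMIT = 65536 };
    static char composite[LIMIT];
    int count = 0;

    memset(composite, 0, sizeof composite);
    for (int i = 2; i < LIMIT; i++) {
        if (!composite[i]) {
            count++;                              /* i is prime */
            for (int j = 2 * i; j < LIMIT; j += i)
                composite[j] = 1;                 /* mark multiples */
        }
    }
    printf("%d primes below %d\n", count, LIMIT);
    return 0;
}
```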
The INTERCAL Reference Manual contains many paradoxical, nonsensical, or otherwise humorous instructions:
Caution! Under no circumstances confuse the mesh with the interleave operator, except under confusing circumstances!
The manual also contains a "tonsil", as explained in this footnote: "4) Since all other reference manuals have appendices, it was decided that the INTERCAL manual should contain some other type of removable organ."
The INTERCAL manual gives unusual names to all non-alphanumeric ASCII characters: single and double quotes are 'sparks' and "rabbit ears" respectively. (The exception is the ampersand: as the Jargon File states, "what could be sillier?") The assignment operator, represented as an equals sign (INTERCAL's "half mesh") in many other programming languages, is in INTERCAL a left-arrow, <-, made up of an "angle" and a "worm", obviously read as "gets".
Input (using the WRITE IN instruction) and output (using the READ OUT instruction) do not use the usual formats; in INTERCAL-72, WRITE IN inputs a number written out as digits in English (such as SIX FIVE FIVE THREE FIVE), and READ OUT outputs it in "butchered" Roman numerals. More recent versions have their own I/O systems.
Comments can be achieved by using the inverted statement identifiers involving NOT or N'T; these cause lines to be initially ABSTAINed from so that they have no effect. (A line can be ABSTAINed from even if it doesn't have valid syntax; syntax errors happen at runtime, and only then when the line is un-ABSTAINed.)
INTERCAL-72 (the original version of INTERCAL) had only four data types: the 16-bit integer (represented with a ., called a "spot"), the 32-bit integer (:, a "twospot"), the array of 16-bit integers (,, a "tail"), and the array of 32-bit integers (;, a "hybrid"). There are 65535 available variables of each type, numbered from .1 to .65535 for 16-bit integers, for instance. However, each of these variables has its own stack on which it can be pushed and popped (STASHed and RETRIEVEd, in INTERCAL terminology), increasing the possible complexity of data structures. More modern versions of INTERCAL have by and large kept the same data structures, with appropriate modifications; TriINTERCAL, which modifies the radix with which numbers are represented, can use a 10-trit type rather than a 16-bit type, and CLC-INTERCAL implements many of its own data structures, such as "classes and lectures", by making the basic data types store more information rather than adding new types. Arrays are dimensioned by assigning to them as if they were a scalar variable. Constants can also be used, and are represented by a # ("mesh") followed by the constant itself, written as a decimal number; only integer constants from 0 to 65535 are supported.
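A rough way to picture this, with invented names and an arbitrary fixed stack depth (both assumptions of this sketch, not part of any INTERCAL implementation), is a value paired with its own stack:

```c
#include <stdio.h>

/* Sketch of one INTERCAL 16-bit "spot" variable (.1) with its own STASH stack. */
#define STACK_DEPTH 64   /* arbitrary depth chosen for this illustration */

struct spot_var {
    unsigned short value;
    unsigned short stash[STACK_DEPTH];
    int depth;
};

static void stash(struct spot_var *v)    { v->stash[v->depth++] = v->value; }
static void retrieve(struct spot_var *v) { v->value = v->stash[--v->depth]; }

int main(void) {
    struct spot_var dot1 = { 0, {0}, 0 };
    dot1.value = 123;
    stash(&dot1);        /* DO STASH .1 */
    dot1.value = 456;
    retrieve(&dot1);     /* DO RETRIEVE .1 */
    printf(".1 = %u\n", dot1.value);   /* prints 123 */
    return 0;
}
```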
There are only five operators in INTERCAL-72. Implementations vary in which characters represent which operation, and many accept more than one character, so more than one possibility is given for many of the operators.
Contrary to most other languages, AND, OR, and XOR are unary operators, which work on consecutive bits of their argument; the most significant bit of the result is the operator applied to the least significant and most significant bits of the input, the second-most-significant bit of the result is the operator applied to the most and second-most significant bits, the third-most-significant bit of the result is the operator applied to the second-most and third-most bits, and so on. The operator is placed between the punctuation mark specifying a variable name or constant and the number that specifies which variable it is, or just inside grouping marks (i.e. one character later than it would be in programming languages like C.) SELECT and INTERLEAVE (which is also known as MINGLE) are infix binary operators; SELECT takes the bits of its first operand that correspond to "1" bits of its second operand and removes the bits that correspond to "0" bits, shifting towards the least significant bit and padding with zeroes (so 51 (110011 in binary) SELECT 21 (10101 in binary) is 5 (101 in binary)); MINGLE alternates bits from its first and second operands (in such a way that the least significant bit of its second operand is the least significant bit of the result). There is no operator precedence; grouping marks must be used to disambiguate the precedence where it would otherwise be ambiguous (the grouping marks available are ' ("spark"), which matches another spark, and " ("rabbit ears"), which matches another rabbit ears; the programmer is responsible for using these in such a way that they make the expression unambiguous).
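The following C sketch (hypothetical helper names; the semantics are taken from the description above) implements SELECT and MINGLE and reproduces the 51 SELECT 21 = 5 example; mingling 0 with 256 likewise yields 65536, the value discussed in the manual excerpt earlier.

```c
#include <stdint.h>
#include <stdio.h>

/* SELECT: keep the bits of a that line up with 1-bits of b, packed toward
   the least significant end and padded with zeroes. */
static uint32_t intercal_select(uint32_t a, uint32_t b) {
    uint32_t result = 0;
    int out = 0;
    for (int i = 0; i < 32; i++) {
        if (b & (1u << i)) {
            if (a & (1u << i))
                result |= 1u << out;
            out++;
        }
    }
    return result;
}

/* MINGLE: interleave the bits of two 16-bit operands; the least significant
   bit of the second operand becomes the least significant bit of the result. */
static uint32_t intercal_mingle(uint16_t a, uint16_t b) {
    uint32_t result = 0;
    for (int i = 0; i < 16; i++) {
        result |= (uint32_t)((b >> i) & 1u) << (2 * i);
        result |= (uint32_t)((a >> i) & 1u) << (2 * i + 1);
    }
    return result;
}

int main(void) {
    printf("51 SELECT 21 = %u\n", intercal_select(51, 21));   /* 5 */
    printf("0 MINGLE 256 = %u\n", intercal_mingle(0, 256));   /* 65536 */
    return 0;
}
```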
INTERCAL statements all start with a "statement identifier"; in INTERCAL-72, this can be DO, PLEASE, or PLEASE DO, all of which mean the same to the program (but using one of these too heavily causes the program to be rejected, an undocumented feature in INTERCAL-72 that was mentioned in the C-INTERCAL manual), or an inverted form (with NOT or N'T appended to the identifier). Backtracking INTERCAL, a modern variant, also allows variants using MAYBE (possibly combined with PLEASE or DO) as a statement identifier, which introduces a choice-point. Before the identifier, an optional line number (an integer enclosed in parentheses) can be given; after the identifier, a percent chance of the line executing can be given in the format %50, which defaults to 100%.
In INTERCAL-72, the main control structures are NEXT, RESUME, and FORGET. DO (line) NEXT branches to the line specified, remembering the next line that would be executed if it weren't for the NEXT on a call stack (other identifiers than DO can be used on any statement, DO is given as an example); DO FORGET expression removes expression entries from the top of the call stack (this is useful to avoid the error that otherwise happens when there are more than 80 entries), and DO RESUME expression removes expression entries from the call stack and jumps to the last line remembered.
C-INTERCAL also provides the COME FROM instruction, written DO COME FROM (line); CLC-INTERCAL and the most recent C-INTERCAL versions also provide computed COME FROM (DO COME FROM expression) and NEXT FROM, which is like COME FROM but also saves a return address on the NEXT STACK.
Alternative ways to affect program flow, originally available in INTERCAL-72, are to use the IGNORE and REMEMBER instructions on variables (which cause writes to the variable to be silently ignored and to take effect again, so that instructions can be disabled by causing them to have no effect), and the ABSTAIN and REINSTATE instructions on lines or on types of statement, causing the lines to have no effect or to have an effect again respectively.
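A rough C model of the NEXT stack semantics described above (the helper names are invented and real implementations do not work this way; the 80-entry limit matches the one mentioned earlier):

```c
#include <stdio.h>

#define MAX_NEXT 80   /* the article notes an error occurs past 80 entries */

static int next_stack[MAX_NEXT];
static int depth = 0;

/* DO (line) NEXT: remember the line that would have run next, then branch. */
static void do_next(int return_line) {
    next_stack[depth++] = return_line;
}

/* DO FORGET n: discard n entries from the top of the stack. */
static void do_forget(int n) {
    depth -= n;
}

/* DO RESUME n: discard n entries and jump to the last line discarded. */
static int do_resume(int n) {
    depth -= n;
    return next_stack[depth];
}

int main(void) {
    do_next(10);   /* NEXT executed on line 10 */
    do_next(20);   /* NEXT executed on line 20 */
    printf("RESUME 1 returns to line %d\n", do_resume(1));   /* 20 */
    printf("RESUME 1 returns to line %d\n", do_resume(1));   /* 10 */
    do_forget(0);
    return 0;
}
```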
The traditional "Hello, world!" program demonstrates how different INTERCAL is from standard programming languages. In C, it could read as follows:
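A conventional version of that C program is:

```c
#include <stdio.h>

int main(void) {
    printf("Hello, world!\n");
    return 0;
}
```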
The equivalent program in C-INTERCAL is longer and harder to read:
The original Woods–Lyon INTERCAL was very limited in its input/output capabilities: the only acceptable input were numbers with the digits spelled out, and the only output was an extended version of Roman numerals.
The C-INTERCAL reimplementation, being available on the Internet, has made the language more popular with devotees of esoteric programming languages. The C-INTERCAL dialect has a few differences from original INTERCAL and introduced a few new features, such as a COME FROM statement and a means of doing text I/O based on the Turing Text Model.
The authors of C-INTERCAL also created the TriINTERCAL variant, based on the Ternary numeral system and generalizing INTERCAL's set of operators.
A more recent variant is Threaded Intercal, which extends the functionality of COME FROM to support multithreading.
CLC-INTERCAL has a library called INTERNET for networking functionality including being an INTERCAL server, and also includes features such as Quantum Intercal, which enables multi-value calculations in a way purportedly ready for the first quantum computers.
In early 2017, a .NET implementation targeting the .NET Framework appeared on GitHub. This implementation supports the creation of standalone binary libraries and interop with other programming languages.
In the article "A Box, Darkly: Obfuscation, Weird Languages, and Code Aesthetics", INTERCAL is described under the heading "Abandon all sanity, ye who enter here: INTERCAL". The compiler and commenting strategy are among the "weird" features described:
The compiler, appropriately named "ick", continues the parody. Anything the compiler can't understand, which in a normal language would result in a compilation error, is just skipped. This "forgiving" feature makes finding bugs very difficult; it also introduces a unique system for adding program comments. The programmer merely inserts non-compileable text anywhere in the program, being careful not to accidentally embed a bit of valid code in the middle of their comment.
In "Technomasochism", Lev Bratishenko characterizes the INTERCAL compiler as a dominatrix:
If PLEASE was not encountered often enough, the program would be rejected; that is, ignored without explanation by the compiler. Too often and it would still be rejected, this time for sniveling. Combined with other words that are rarely used in programming languages but appear as statements in INTERCAL, the code reads like someone pleading.
The Nitrome Enjoyment System, a fictional video game console created by British indie game developer Nitrome, has games which are programmed in INTERCAL. | [
| https://en.wikipedia.org/wiki/INTERCAL
15,076 | International Data Encryption Algorithm | In cryptography, the International Data Encryption Algorithm (IDEA), originally called Improved Proposed Encryption Standard (IPES), is a symmetric-key block cipher designed by James Massey of ETH Zurich and Xuejia Lai; it was first described in 1991. The algorithm was intended as a replacement for the Data Encryption Standard (DES). IDEA is a minor revision of an earlier cipher, the Proposed Encryption Standard (PES).
The cipher was designed under a research contract with the Hasler Foundation, which became part of Ascom-Tech AG. The cipher was patented in a number of countries but was freely available for non-commercial use. The name "IDEA" is also a trademark. The last patents expired in 2012, and IDEA is now patent-free and thus completely free for all uses.
IDEA was used in Pretty Good Privacy (PGP) v2.0 and was incorporated after the original cipher used in v1.0, BassOmatic, was found to be insecure. IDEA is an optional algorithm in the OpenPGP standard.
IDEA operates on 64-bit blocks using a 128-bit key and consists of a series of 8 identical transformations (a round) and an output transformation (the half-round). The processes for encryption and decryption are similar. IDEA derives much of its security by interleaving operations from different groups — modular addition and multiplication, and bitwise eXclusive OR (XOR) — which are algebraically "incompatible" in some sense. In more detail, these operators, which all deal with 16-bit quantities, are bitwise XOR, addition modulo 2^16, and multiplication modulo 2^16 + 1, where the all-zero word is interpreted as 2^16.
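The following is a minimal Python sketch of the three 16-bit group operations just described; the function names are illustrative, and only the mathematical definitions come from the cipher's description.

```python
MASK16 = 0xFFFF

def xor16(a: int, b: int) -> int:
    """Bitwise XOR of two 16-bit words."""
    return (a ^ b) & MASK16

def add16(a: int, b: int) -> int:
    """Addition modulo 2**16."""
    return (a + b) & MASK16

def mul16(a: int, b: int) -> int:
    """Multiplication modulo 2**16 + 1, with the all-zero word standing for 2**16."""
    a = a or 0x10000
    b = b or 0x10000
    r = (a * b) % 0x10001
    return r & MASK16  # a result of 2**16 maps back to the all-zero word
```

Mixing these three operations, which do not satisfy any common distributive or associative law with one another, is what provides the algebraic "incompatibility" the designers relied on.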
After the 8 rounds comes a final “half-round”, the output transformation, in which the four 16-bit sub-blocks are combined with the last four subkeys (the swap of the middle two values cancels out the swap at the end of the last round, so that there is no net swap).
The overall structure of IDEA follows the Lai–Massey scheme. XOR is used for both subtraction and addition. IDEA uses a key-dependent half-round function. To work with 16-bit words (meaning 4 inputs instead of 2 for the 64-bit block size), IDEA uses the Lai–Massey scheme twice in parallel, with the two parallel round functions being interwoven with each other. To ensure sufficient diffusion, two of the sub-blocks are swapped after each round.
Each round uses 6 16-bit sub-keys, while the half-round uses 4, a total of 52 for 8.5 rounds. The first 8 sub-keys are extracted directly from the key, with K1 from the first round being the lower 16 bits; further groups of 8 keys are created by rotating the main key left 25 bits between each group of 8. This means that it is rotated less than once per round, on average, for a total of 6 rotations.
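As an illustration of the key schedule's structure, here is a short Python sketch. The repeated 25-bit left rotation follows the description above, while the exact order in which the eight 16-bit words are sliced from the key varies between descriptions and should be treated as an assumption here.

```python
def idea_subkeys(key: int) -> list[int]:
    """Derive IDEA's 52 16-bit subkeys from a 128-bit integer key."""
    assert 0 <= key < 1 << 128
    subkeys = []
    while len(subkeys) < 52:
        # slice the current key into eight 16-bit words (most significant first)
        for i in range(8):
            subkeys.append((key >> (112 - 16 * i)) & 0xFFFF)
        # rotate the 128-bit key left by 25 bits for the next group of eight
        key = ((key << 25) | (key >> 103)) & ((1 << 128) - 1)
    return subkeys[:52]
```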
Decryption works like encryption, but the order of the round keys is inverted, and the subkeys for the odd rounds are replaced by their group-operation inverses. For instance, the values of subkeys K1–K4 are replaced by the inverses of K49–K52 for the respective group operation, and K5 and K6 of each group are replaced by K47 and K48 for decryption.
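A hedged sketch of how such inverse subkeys could be computed in Python (the helper names are mine): multiplicative subkeys are inverted modulo 2^16 + 1 using the same 0 ↔ 2^16 convention as above, and additive subkeys modulo 2^16.

```python
def mul_inverse(k: int) -> int:
    """Multiplicative inverse modulo 2**16 + 1 (the all-zero word stands for 2**16)."""
    k = k or 0x10000
    inv = pow(k, -1, 0x10001)  # 65537 is prime, so the inverse always exists
    return inv & 0xFFFF        # 2**16 maps back to the all-zero word

def add_inverse(k: int) -> int:
    """Additive inverse modulo 2**16."""
    return (-k) & 0xFFFF
```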
The designers analysed IDEA to measure its strength against differential cryptanalysis and concluded that it is immune under certain assumptions. No successful linear or algebraic weaknesses have been reported. As of 2007, the best attack applied to all keys could break IDEA reduced to 6 rounds (the full IDEA cipher uses 8.5 rounds). Note that a "break" is any attack that requires less than 2^128 operations, the cost of a brute-force search of the 128-bit key space; the 6-round attack requires 2^64 known plaintexts and 2^126.8 operations.
Bruce Schneier thought highly of IDEA in 1996, writing: "In my opinion, it is the best and most secure block algorithm available to the public at this time." (Applied Cryptography, 2nd ed.) However, by 1999 he was no longer recommending IDEA due to the availability of faster algorithms, some progress in its cryptanalysis, and the issue of patents.
In 2011 full 8.5-round IDEA was broken using a meet-in-the-middle attack. Independently in 2012, full 8.5-round IDEA was broken using a narrow-bicliques attack, with a reduction of cryptographic strength of about 2 bits, similar to the effect of the previous bicliques attack on AES; however, this attack does not threaten the security of IDEA in practice.
The very simple key schedule makes IDEA subject to a class of weak keys; some keys containing a large number of 0 bits produce weak encryption. These are of little concern in practice, being sufficiently rare that they are unnecessary to avoid explicitly when generating keys randomly. A simple fix was proposed: XORing each subkey with a 16-bit constant, such as 0x0DAE.
Larger classes of weak keys were found in 2002.
These larger classes are still of negligible probability of being a concern for a randomly chosen key, and some of the problems are fixed by the constant XOR proposed earlier, but the paper is not certain whether all of them are. A more comprehensive redesign of the IDEA key schedule may be desirable.
A patent application for IDEA was first filed in Switzerland (CH A 1690/90) on May 18, 1990, then an international patent application was filed under the Patent Cooperation Treaty on May 16, 1991. Patents were eventually granted in Austria, France, Germany, Italy, the Netherlands, Spain, Sweden, Switzerland, the United Kingdom, (European Patent Register entry for European patent no. 0482154, filed May 16, 1991, issued June 22, 1994 and expired May 16, 2011), the United States (U.S. Patent 5,214,703, issued May 25, 1993 and expired January 7, 2012) and Japan (JP 3225440, expired May 16, 2011).
MediaCrypt AG now offers a successor to IDEA and focuses on its new cipher IDEA NXT (officially released in May 2005), which was previously called FOX.
| https://en.wikipedia.org/wiki/International_Data_Encryption_Algorithm
15,077 | Indoor rower | An indoor rower, or rowing machine, is a machine used to simulate the action of watercraft rowing for the purpose of exercise or training for rowing. Modern indoor rowers are often known as ergometers (colloquially erg or ergo) because they measure work performed by the rower (which can be measured in ergs). Indoor rowing has become established as a sport, drawing a competitive environment from around the world. The term "indoor rower" also refers to a participant in this sport.
Chabrias, an Athenian admiral of the 4th century BC, introduced the first rowing machines as supplemental military training devices. "To train inexperienced oarsmen, Chabrias built wooden rowing frames onshore where beginners could learn technique and timing before they went onboard ship."
Early rowing machines are known to have existed from the mid-1800s, a US patent being issued to W.B. Curtis in 1872 for a particular hydraulic-based damper design. Machines using linear pneumatic resistance were common around 1900—one of the most popular was the Narragansett hydraulic rower, manufactured in Rhode Island from around 1900–1960.
In the 1970s, the Gjessing-Nilson ergometer from Norway used a friction brake mechanism with industrial strapping applied over the broad rim of the flywheel. Weights hanging from the strap ensured that an adjustable and predictable friction could be calculated.
The first air resistance ergometers were introduced around 1980 by Repco. In 1981, Peter and Richard Dreissigacker, and Jonathan Williams, filed for U.S. patent protection as joint inventors of a "Stationary Rowing Unit". The first commercial embodiment of the Concept2 "rowing ergometer" was the Model A, a fixed-frame sliding-seat design using a bicycle wheel with fins attached for air resistance. The Model B, released in 1986, introduced a solid cast flywheel and the first digital performance monitor, which proved revolutionary. This machine's capability of accurate calibration combined with easy transportability spawned the sport of competitive indoor rowing, and revolutionised training and selection procedures for watercraft rowing. Later models were the C (1993) and D (2003).
In 1995, Casper Rekers, a Dutch engineer, was granted a U.S. patent for a (US 5382210A) "Dynamically Balanced Rowing Simulator". This device differed from the prior art in that the flywheel and footrests are fixed to a carriage, the carriage being free to slide fore and aft on a rail or rails integral to the frame. The seat is also free to slide fore and aft on a rail or rails integral to the frame.
Modern indoor rowers have their resistance provided by a flywheel. Indoor rowers that utilise flywheel resistance can be categorised into two motion types. In both types, the rowing movement of the user causes the footrests and the seat to move further and closer apart in co-ordination with the user's stroke. The difference between the two types is in the movement, or absence of movement, of the footrests relative to ground.
The first type is characterised by the Dreissigacker/Williams device. With this type the flywheel and footrests are fixed to a stationary frame, and the seat is free to slide fore and aft on a rail or rails integral to the stationary frame. Therefore, during use, the seat moves relative to the footrests and also relative to ground, while the flywheel and footrests remain stationary relative to ground.
The second type is characterised by the Rekers device. With this type, both the seat and the footrests are free to slide fore and aft on a rail or rails integral to a stationary frame. Therefore, during use, the seat and the footrests move relative to each other, and both also move relative to ground.
Piston resistance comes from hydraulic cylinders that are attached to the handles of the rowing machine.
Braked flywheel resistance models comprise magnetic, air, and water resistance rowers.
Magnetic resistance models control resistance by means of permanent magnets or electromagnets.
Air resistance models use vanes on the flywheel to provide the flywheel braking needed to generate resistance.
Water resistance models consist of a paddle revolving in an enclosed tank of water.
A dual-resistance rower combines fan and magnetic brake resistance, providing a range of intensity levels from warm-ups to high-intensity interval training (HIIT).
Sometimes, slides are placed underneath the machine, which allows the machine to move back and forth smoothly as if there were water beneath the rower. The slides can be connected in rows or columns so that rowers are forced to move together on the ergometer, similarly to the way they would match up their rhythm in a boat.
Indoor rowers usually also display estimates of rowing boat speed and energy used by the athlete.
Rowing is an example of a method of aerobic exercise, which has been observed to improve athletes' VO2 peak. Indoor rowing primarily works the cardiovascular systems with typical workouts consisting of steady pieces of 20–40 minutes.
The standard measurement of speed on an ergometer is generally known as the "split", or the amount of time in minutes and seconds required to travel 500 metres (1,600 ft) at the current pace. Other standard measurement units on the indoor rowing machine include calories and watts.
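To make the "split" concrete, here is a small Python sketch (the function name is mine) that converts a rowed distance and elapsed time into the 500-metre split shown by most performance monitors.

```python
def split_500m(distance_m: float, time_s: float) -> str:
    """Return the 500 m split (m:ss.s) for a piece rowed at constant pace."""
    if distance_m <= 0 or time_s <= 0:
        raise ValueError("distance and time must be positive")
    seconds_per_500m = time_s * 500.0 / distance_m
    minutes, seconds = divmod(seconds_per_500m, 60)
    return f"{int(minutes)}:{seconds:04.1f}"

# Example: 2000 m rowed in 7 minutes 30 seconds corresponds to a 1:52.5 split
print(split_500m(2000, 7 * 60 + 30))
```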
Although ergometer tests are used by rowing coaches to evaluate rowers and are part of athlete selection for many senior and junior national rowing teams, data suggests that "physiological and performance tests performed on a rowing ergometer are not good indicators of on-water performance".
Some standard indoor rower ergometer tests include: 250 m ergometer test, 2000 m ergometer test, 5 km ergometer test, 16 km ergometer test and the 30-minute ergometer test.
Rowing on an ergometer requires four basic phases to complete one stroke; the catch, the drive, the finish and the recovery. The catch is the initial part of the stroke. The drive is where the power from the rower is generated while the finish is the final part of the stroke. Then, the recovery is the initial phase to begin taking a new stroke. The phases repeat until a time duration or a distance is completed. At each stage of the stroke the back should remain in a neutral, flat position, pivoting at the hips to avoid injury.
Knees are bent with the shins in a vertical position. The back should be roughly parallel to the thigh without hyperflexion (leaning forward too far). The arms and shoulders should be extended forward and relaxed. The arms should be level.
The drive is initiated by a push and extension of the legs; the body remains in the catch posture at this point of the drive. As the legs continue to full extension, the hip angle opens and the rower engages the core to begin the motion of the body levering backward, adding to the work of the legs. When the legs are fully extended, the rower begins to pull the handle toward the chest with their arms, completing the stroke with the handle halfway up the body and the forearms parallel to the ground.
The legs are at full extension and flat. The shoulders are slightly behind the pelvis, and the arms are in full contraction with the elbows bent and hands against the chest below the nipples. The back of the rower is still maintained in an upright posture and wrists should be flat.
The recovery is a slow slide back to the initial part of the stroke; it gives the rower time to recover from the previous stroke. During the recovery the actions are in reverse order of the drive. The recovery is initiated by the extension of the arms until they are fully extended in front of the body. The torso is then engaged by pivoting at the hips to move the torso in front of the hips. Weight transfers from the back of the seat to the front of the seat at this time. When the hands come over the knees, the legs are bent at the knees, moving the slide towards the front of the machine. As the back becomes more parallel to the thighs, the recovery is completed when the shins are perpendicular to the ground. At this point the recovery transitions to the catch for the next stroke.
The first indoor rowing competition was held in Cambridge, Massachusetts, in February 1982, with the participation of 96 on-water rowers who called themselves the "Charles River Association of Sculling Has-Beens", hence the acronym "CRASH-B". The core events for indoor rowing competitions that are currently competed in at the World Rowing Indoor Championships are the individual 500m, individual 2000m, individual 1 hour, and 3-minute teams event. Events at other indoor rowing competitions include the mile and the 2500-meter.
Most competitions are organised into categories based on sex, age, and weight class.
| https://en.wikipedia.org/wiki/Indoor_rower
15,078 | Internetwork Packet Exchange | Internetwork Packet Exchange (IPX) is the network layer protocol in the IPX/SPX protocol suite. IPX is derived from Xerox Network Systems' IDP. It also has the ability to act as a transport layer protocol.
The IPX/SPX protocol suite was very popular through the late 1980s and mid-1990s because it was used by Novell NetWare, a network operating system. Due to Novell NetWare's popularity, IPX became a prominent protocol for internetworking.
A big advantage of IPX was a small memory footprint of the IPX driver, which was vital for DOS and Windows up to Windows 95 due to the limited size at that time of conventional memory. Another IPX advantage is easy configuration of its client computers. However, IPX does not scale well for large networks such as the Internet. As such, IPX usage decreased as the boom of the Internet made TCP/IP nearly universal.
Computers and networks can run multiple network protocols, so almost all IPX sites also run TCP/IP to allow Internet connectivity. It is also possible to run later Novell products without IPX, since NetWare version 5 introduced full support for both IPX and TCP/IP in late 1998.
A big advantage of the IPX protocol is that it needs little or no configuration. At a time when protocols for dynamic host configuration did not exist and the BOOTP protocol for centralized assignment of addresses was not common, an IPX network could be configured almost automatically. A client computer uses the MAC address of its network card as the node address and learns what it needs to know about the network topology from the servers or routers – routes are propagated by the Routing Information Protocol, services by the Service Advertising Protocol.
The administrator of a small IPX network had to take care only of assigning a network number to each cabling system and choosing the frame type in use; the remaining parameters were configured automatically.
Each IPX packet begins with a 30-byte header containing a checksum, the packet length, a transport control field (hop count), the packet type, and the destination and source addresses, each consisting of network, node, and socket numbers.
The Packet Type field identifies the upper-layer protocol carried in the packet, for example RIP, SAP, SPX, or NCP.
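As an illustration (not taken from the article), here is a minimal Python sketch of how such a 30-byte header could be packed with the standard library's struct module; the helper name, parameter names, and example values are assumptions.

```python
import struct

def build_ipx_header(length, packet_type,
                     dst_net, dst_node, dst_socket,
                     src_net, src_node, src_socket, hops=0):
    """Pack a 30-byte IPX header (all fields big-endian).

    Layout: checksum (2 bytes), packet length (2), transport control (1),
    packet type (1), destination network (4) / node (6) / socket (2),
    source network (4) / node (6) / socket (2).
    """
    checksum = 0xFFFF  # IPX normally carries no checksum
    return (struct.pack(">HHBB", checksum, length, hops, packet_type)
            + struct.pack(">I6sH", dst_net, dst_node, dst_socket)
            + struct.pack(">I6sH", src_net, src_node, src_socket))

# Example: a type-5 (SPX) packet to the broadcast node of network 0x00000001
header = build_ipx_header(30, 5,
                          0x00000001, b"\xff" * 6, 0x4000,
                          0x00000001, b"\x00\x00\x1b\x2c\x3d\x4e", 0x4001)
assert len(header) == 30
```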
An IPX address consists of a 32-bit network number, a 48-bit node number, and a 16-bit socket number.
The network number makes it possible to address (and communicate with) IPX nodes which do not belong to the same network or cabling system. The cabling system is a network in which a data link layer protocol can be used for communication. To allow communication between different networks, they must be connected with IPX routers. A set of interconnected networks is called an internetwork. Any Novell NetWare server may serve as an IPX router. Novell also supplied stand-alone routers. Multiprotocol routers of other vendors often support IPX routing. Using different frame formats in one cabling system is possible, but it works much as if separate cabling systems were used (i.e. different network numbers must be used for different frame formats even in the same cabling system, and a router must be used to allow communication between nodes using different frame formats in the same cabling system).
The node number is used to address an individual computer (or more exactly, a network interface) in the network. Client stations use their network interface card's MAC address as the node number.
The value FF:FF:FF:FF:FF:FF may be used as a node number in a destination address to broadcast a packet to "all nodes in the current network".
The socket number serves to select a process or application in the destination node. The presence of a socket number in the IPX address allows the IPX to act as a transport layer protocol, comparable with the User Datagram Protocol (UDP) in the Internet protocol suite.
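A small Python sketch (the helper name is mine) showing the conventional network:node:socket hexadecimal notation built from the three components just described.

```python
def format_ipx_address(network: int, node: bytes, socket: int) -> str:
    """Render an IPX address in the usual network:node:socket hex notation."""
    if len(node) != 6:
        raise ValueError("node number must be 6 bytes (usually the MAC address)")
    return f"{network:08X}:{node.hex().upper()}:{socket:04X}"

# Example: the broadcast node of network 0x00000001, socket 0x0451
print(format_ipx_address(0x00000001, b"\xff" * 6, 0x0451))
# -> 00000001:FFFFFFFFFFFF:0451
```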
The IPX network number is conceptually identical to the network part of the IP address (the parts with netmask bits set to 1); the node number has the same meaning as the bits of IP address with netmask bits set to 0. The difference is that the boundary between network and node part of address in IP is variable, while in IPX it is fixed. As the node address is usually identical to the MAC address of the network adapter, the Address Resolution Protocol is not needed in IPX.
For routing, the entries in the IPX routing table are similar to those in IP routing tables; routing is done by network address, and for each network address a network:node of the next router is specified, in a similar fashion to the way an IP address/netmask is specified in IP routing tables.
There are three routing protocols available for IPX networks. In early IPX networks, a version of Routing Information Protocol (RIP) was the only available protocol to exchange routing information. Unlike RIP for IP, it uses delay time as the main metric, retaining the hop count as a secondary metric. Since NetWare 3, the NetWare Link Services Protocol (NLSP) based on IS-IS is available, which is more suitable for larger networks. Cisco routers implement an IPX version of EIGRP protocol as well.
IPX can be transmitted over Ethernet using one of the following four frame formats or encapsulation types: Ethernet II, "raw" IEEE 802.3, IEEE 802.2 (LLC), and IEEE 802.2 with SNAP.
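As a sketch of how a receiver might tell these encapsulations apart (the constants reflect common descriptions of Novell framing and are assumptions here, not quoted from the article):

```python
def ipx_encapsulation(frame: bytes) -> str | None:
    """Guess which Ethernet encapsulation carries an IPX payload, if any."""
    ethertype = int.from_bytes(frame[12:14], "big")
    if ethertype == 0x8137:                  # Ethernet II with the IPX EtherType
        return "Ethernet II"
    if ethertype <= 1500:                    # an 802.3 length field follows
        if frame[14:16] == b"\xff\xff":      # IPX "checksum" right after the length
            return "raw 802.3"
        if frame[14:16] == b"\xe0\xe0":      # 802.2 LLC with the NetWare SAP
            return "802.2 (LLC)"
        if frame[14:16] == b"\xaa\xaa":      # 802.2 SNAP header
            return "802.2 SNAP"
    return None
```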
In non-Ethernet networks only 802.2 and SNAP frame types are available.
 | Internetwork Packet Exchange (IPX) is the network layer protocol in the IPX/SPX protocol suite. IPX is derived from Xerox Network Systems' IDP. It also has the ability to act as a transport layer protocol. The IPX/SPX protocol suite was very popular through the late 1980s and mid-1990s because it was used by Novell NetWare, a network operating system. Due to Novell NetWare's popularity, IPX became a prominent protocol for internetworking. A big advantage of IPX was the small memory footprint of the IPX driver, which was vital for DOS and Windows up to Windows 95 because of the limited size of conventional memory at that time. Another IPX advantage is the easy configuration of its client computers. However, IPX does not scale well for large networks such as the Internet. As such, IPX usage decreased as the boom of the Internet made TCP/IP nearly universal. Computers and networks can run multiple network protocols, so almost all IPX sites also run TCP/IP to allow Internet connectivity. It is also possible to run later Novell products without IPX, as NetWare version 5 introduced full support for both IPX and TCP/IP in late 1998. | 2001-09-27T23:29:53Z | 2023-12-04T00:52:07Z | [
"Template:Cite book",
"Template:Cite web",
"Template:Short description",
"Template:Redirect",
"Template:Multiple issues"
]
| https://en.wikipedia.org/wiki/Internetwork_Packet_Exchange |
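The record above describes the IPX address as three fixed-width fields – a 32-bit network number, a 48-bit node number (normally the network card's MAC address), and a 16-bit socket number that selects a process and lets IPX double as a transport-layer protocol. As a rough, non-authoritative illustration of that layout only, the Python sketch below models such an address and its conventional hexadecimal rendering; the class name, helper constant, and example values are invented for this sketch and are not taken from the article or from any Novell API.

```python
# Illustrative sketch of an IPX address (network.node:socket), assuming the
# 32-bit / 48-bit / 16-bit field split described in the record above.
# All names and example values are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class IPXAddress:
    network: int   # 32-bit network number (identifies one cabling system / frame format)
    node: bytes    # 6-byte node number, usually the NIC's MAC address
    socket: int    # 16-bit socket number selecting a process, much like a UDP port

    def __str__(self) -> str:
        node_hex = ":".join(f"{b:02X}" for b in self.node)
        return f"{self.network:08X}.{node_hex}:{self.socket:04X}"


# "All nodes in the current network" broadcast node number mentioned above.
BROADCAST_NODE = bytes([0xFF] * 6)

if __name__ == "__main__":
    # Hypothetical server on network 0x00A1B2C3; 0x0451 is conventionally the
    # NetWare Core Protocol socket.
    server = IPXAddress(network=0x00A1B2C3,
                        node=bytes.fromhex("00AA00123456"),
                        socket=0x0451)
    print(server)      # 00A1B2C3.00:AA:00:12:34:56:0451

    broadcast = IPXAddress(0x00A1B2C3, BROADCAST_NODE, 0x0452)
    print(broadcast)   # 00A1B2C3.FF:FF:FF:FF:FF:FF:0452
```

Because the network/node boundary is fixed and the node number is usually just the adapter's MAC address, no ARP-style lookup is needed, which is one reason IPX clients could be configured almost automatically.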
15,079 | International human rights instruments | International human rights instruments are the treaties and other international texts that serve as legal sources for international human rights law and the protection of human rights in general. There are many varying types, but most can be classified into two broad categories: declarations, adopted by bodies such as the United Nations General Assembly, which are by nature declaratory and so not legally binding, although they may be politically authoritative and very well-respected soft law, and often express guiding principles; and conventions, which are multi-party treaties designed to become legally binding, usually include prescriptive and very specific language, and usually are concluded by a long procedure that frequently requires ratification by each state's legislature. Lesser known are some "recommendations" which are similar to conventions in being multilaterally agreed, yet cannot be ratified, and serve to set common standards. There may also be administrative guidelines that are agreed multilaterally by states, as well as the statutes of tribunals or other institutions. A specific prescription or principle from any of these various international instruments can, over time, attain the status of customary international law whether it is specifically accepted by a state or not, just because it is well-recognized and followed over a sufficiently long time.
International human rights instruments can be divided further into global instruments, to which any state in the world can be a party, and regional instruments, which are restricted to states in a particular region of the world.
Most conventions and recommendations (but few declarations) establish mechanisms for monitoring and establish bodies to oversee their implementation. In some cases these bodies may have relatively little political authority or legal means and may be ignored by member states; in other cases these mechanisms have bodies with great political authority and their decisions are almost always implemented. A good example of the latter is the European Court of Human Rights.
Monitoring mechanisms also vary as to the degree of individual access to expose cases of abuse and plead for remedies. Under some conventions or recommendations – e.g. the European Convention on Human Rights – individuals or states are permitted, subject to certain conditions, to take individual cases to a full-fledged tribunal at the international level. Sometimes, this can be done in national courts because of universal jurisdiction.
The Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and the International Covenant on Economic, Social and Cultural Rights together with other international human rights instruments are sometimes referred to as the "International Bill of Human Rights". International human rights instruments are identified by the OHCHR and most are referenced on the OHCHR website.
According to OHCHR, there are 9 or more core international human rights instruments and several optional protocols. Among the well-known instruments are:
Several more human rights instruments exist. A few examples: | [
{
"paragraph_id": 0,
"text": "International human rights instruments are the treaties and other international texts that serve as legal sources for international human rights law and the protection of human rights in general. There are many varying types, but most can be classified into two broad categories: declarations, adopted by bodies such as the United Nations General Assembly, which are by nature declaratory, so not legally-binding although they may be politically authoritative and very well-respected soft law;, and often express guiding principles; and conventions that are multi-party treaties that are designed to become legally binding, usually include prescriptive and very specific language, and usually are concluded by a long procedure that frequently requires ratification by each states' legislature. Lesser known are some \"recommendations\" which are similar to conventions in being multilaterally agreed, yet cannot be ratified, and serve to set common standards. There may also be administrative guidelines that are agreed multilaterally by states, as well as the statutes of tribunals or other institutions. A specific prescription or principle from any of these various international instruments can, over time, attain the status of customary international law whether it is specifically accepted by a state or not, just because it is well-recognized and followed over a sufficiently long time.",
"title": ""
},
{
"paragraph_id": 1,
"text": "International human rights instruments can be divided further into global instruments, to which any state in the world can be a party, and regional instruments, which are restricted to states in a particular region of the world.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Most conventions and recommendations (but few declarations) establish mechanisms for monitoring and establish bodies to oversee their implementation. In some cases these bodies that may have relatively little political authority or legal means, and may be ignored by member states; in other cases these mechanisms have bodies with great political authority and their decisions are almost always implemented. A good example of the latter is the European Court of Human Rights.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Monitoring mechanisms also vary as to the degree of individual access to expose cases of abuse and plea for remedies. Under some conventions or recommendations – e.g. the European Convention on Human Rights – individuals or states are permitted, subject to certain conditions, to take individual cases to a full-fledged tribunal at international level. Sometimes, this can be done in national courts because of universal jurisdiction.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and the International Covenant on Economic, Social and Cultural Rights together with other international human rights instruments are sometimes referred to as the \"International Bill of Human Rights\". International human rights instruments are identified by the OHCHR and most are referenced on the OHCHR website.",
"title": ""
},
{
"paragraph_id": 5,
"text": "According to OHCHR, there are 9 or more core international human rights instruments and several optional protocols. Among the well-known instruments are:",
"title": "Conventions"
},
{
"paragraph_id": 6,
"text": "Several more human rights instruments exist. A few examples:",
"title": "Conventions"
}
]
 | International human rights instruments are the treaties and other international texts that serve as legal sources for international human rights law and the protection of human rights in general. There are many varying types, but most can be classified into two broad categories: declarations, adopted by bodies such as the United Nations General Assembly, which are by nature declaratory and so not legally binding, although they may be politically authoritative and very well-respected soft law, and often express guiding principles; and conventions, which are multi-party treaties designed to become legally binding, usually include prescriptive and very specific language, and usually are concluded by a long procedure that frequently requires ratification by each state's legislature. Lesser known are some "recommendations" which are similar to conventions in being multilaterally agreed, yet cannot be ratified, and serve to set common standards. There may also be administrative guidelines that are agreed multilaterally by states, as well as the statutes of tribunals or other institutions. A specific prescription or principle from any of these various international instruments can, over time, attain the status of customary international law whether it is specifically accepted by a state or not, just because it is well-recognized and followed over a sufficiently long time. International human rights instruments can be divided further into global instruments, to which any state in the world can be a party, and regional instruments, which are restricted to states in a particular region of the world. Most conventions and recommendations establish mechanisms for monitoring and establish bodies to oversee their implementation. In some cases these bodies may have relatively little political authority or legal means and may be ignored by member states; in other cases these mechanisms have bodies with great political authority and their decisions are almost always implemented. A good example of the latter is the European Court of Human Rights. Monitoring mechanisms also vary as to the degree of individual access to expose cases of abuse and plead for remedies. Under some conventions or recommendations – e.g. the European Convention on Human Rights – individuals or states are permitted, subject to certain conditions, to take individual cases to a full-fledged tribunal at the international level. Sometimes, this can be done in national courts because of universal jurisdiction. The Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and the International Covenant on Economic, Social and Cultural Rights together with other international human rights instruments are sometimes referred to as the "International Bill of Human Rights". International human rights instruments are identified by the OHCHR and most are referenced on the OHCHR website. | 2002-02-25T15:51:15Z | 2023-12-27T09:47:12Z | [
"Template:Cite journal",
"Template:International human rights instruments",
"Template:Authority control",
"Template:Short description",
"Template:More citations needed",
"Template:Reflist",
"Template:Cite web"
]
| https://en.wikipedia.org/wiki/International_human_rights_instruments |
15,080 | Indian removal | Indian removal was the United States government policy of forced displacement of self-governing tribes of Native Americans from their ancestral homelands in the eastern United States to lands west of the Mississippi River – specifically, to a designated Indian Territory (roughly, present-day Oklahoma). The Indian Removal Act, the key law which authorized the removal of Native tribes, was signed by Andrew Jackson in 1830. Although Jackson took a hard line on Indian removal, the law was enforced primarily during the Martin Van Buren administration. After the passage of the Indian Removal Act in 1830, approximately 60,000 members of the Cherokee, Muscogee (Creek), Seminole, Chickasaw, and Choctaw nations (including thousands of their black slaves) were forcibly removed from their ancestral homelands, with thousands dying during the Trail of Tears.
Indian removal, a popular policy among incoming settlers, was a consequence of actions by European settlers in North America during the colonial period and then by the United States government (and its citizens) until the mid-20th century. The policy traced its origins to the administration of James Monroe, although it addressed conflicts between European and Native Americans which had occurred since the 17th century and were escalating into the early 19th century (as European settlers pushed westward in the cultural belief of manifest destiny). Historical views of Indian removal have been reevaluated since that time. Widespread contemporary acceptance of the policy, due in part to the popular embrace of the concept of manifest destiny, has given way to a more somber perspective. Historians have often described the removal of Native Americans as paternalism, ethnic cleansing, or genocide.
American leaders in the Revolutionary and early US eras debated about whether Native Americans should be treated as individuals or as nations.
In the indictment section of the Declaration of Independence, the Indigenous inhabitants of the United States are referred to as "merciless Indian Savages", reflecting a commonly held view at the time by the colonists in the United States.
In a draft "Proposed Articles of Confederation" presented to the Continental Congress on May 10, 1775, Benjamin Franklin called for a "perpetual Alliance" with the Indians in the nation about to be born, particularly with the six nations of the Iroquois Confederacy:
Article XI. A perpetual alliance offensive and defensive is to be entered into as soon as may be with the Six Nations; their Limits to be ascertained and secured to them; their Land not to be encroached on, nor any private or Colony Purchases made of them hereafter to be held good, nor any Contract for Lands to be made but between the Great Council of the Indians at Onondaga and the General Congress. The Boundaries and Lands of all the other Indians shall also be ascertained and secured to them in the same manner; and Persons appointed to reside among them in proper Districts, who shall take care to prevent Injustice in the Trade with them, and be enabled at our general Expense by occasional small Supplies, to relieve their personal Wants and Distresses. And all Purchases from them shall be by the Congress for the General Advantage and Benefit of the United Colonies.
The Confederation Congress passed the Northwest Ordinance of 1787 (a precedent for the U.S. territorial expansion that would occur for years to come), calling for the protection of Native American "property, rights, and liberty"; the U.S. Constitution of 1787 (Article I, Section 8) made Congress responsible for regulating commerce with the Indian tribes. In 1790, the new U.S. Congress passed the Indian Nonintercourse Act (renewed and amended in 1793, 1796, 1799, 1802, and 1834) to protect and codify the land rights of recognized tribes.
President George Washington, in his 1790 address to the Seneca Nation which called the pre-Constitutional Indian land-sale difficulties "evils", said that the case was now altered and pledged to uphold Native American "just rights". In March and April 1792, Washington met with 50 tribal chiefs in Philadelphia—including the Iroquois—to discuss strengthening the friendship between them and the United States. Later that year, in his fourth annual message to Congress, Washington stressed the need to build peace, trust, and commerce with Native Americans:
I cannot dismiss the subject of Indian affairs without again recommending to your consideration the expediency of more adequate provision for giving energy to the laws throughout our interior frontier, and for restraining the commission of outrages upon the Indians; without which all pacific plans must prove nugatory. To enable, by competent rewards, the employment of qualified and trusty persons to reside among them, as agents, would also contribute to the preservation of peace and good neighbourhood. If, in addition to these expedients, an eligible plan could be devised for promoting civilization among the friendly tribes, and for carrying on trade with them, upon a scale equal to their wants, and under regulations calculated to protect them from imposition and extortion, its influence in cementing their interests with our's [sic] could not but be considerable.
In his seventh annual message to Congress in 1795, Washington intimated that if the U.S. government wanted peace with the Indians it must behave peacefully; if the U.S. wanted raids by Indians to stop, raids by American "frontier inhabitants" must also stop.
In his Notes on the State of Virginia (1785), Thomas Jefferson defended Native American culture and marvelled at how the tribes of Virginia "never submitted themselves to any laws, any coercive power, any shadow of government" due to their "moral sense of right and wrong". He wrote to the Marquis de Chastellux later that year, "I believe the Indian then to be in body and mind equal to the whiteman". Jefferson's desire, as interpreted by Francis Paul Prucha, was for Native Americans to intermix with European Americans and become one people. To achieve that end as president, Jefferson offered U.S. citizenship to some Indian nations and proposed offering them credit to facilitate trade.
On 27 February 1803, Jefferson wrote in a letter to William Henry Harrison:
In this way our settlements will gradually circumbscribe & approach the Indians, & they will in time either incorporate with us as citizens of the US. or remove beyond the Missisipi. The former is certainly the termination of their history most happy for themselves. But in the whole course of this, it is essential to cultivate their love. As to their fear, we presume that our strength & their weakness is now so visible that they must see we have only to shut our hand to crush them, & that all our liberalities to them proceed from motives of pure humanity only.
As president, Thomas Jefferson developed a far-reaching Indian policy with two primary goals. He wanted to assure that the Native nations (not foreign nations) were tightly bound to the new United States, as he considered the security of the nation to be paramount. He also wanted to "civilize" them into adopting an agricultural, rather than a hunter-gatherer, lifestyle. These goals would be achieved through treaties and the development of trade.
Jefferson initially promoted an American policy which encouraged Native Americans to become assimilated, or "civilized". He made sustained efforts to win the friendship and cooperation of many Native American tribes as president, repeatedly articulating his desire for a united nation of whites and Indians as in his November 3, 1802, letter to Seneca spiritual leader Handsome Lake:
Go on then, brother, in the great reformation you have undertaken ... In all your enterprises for the good of your people, you may count with confidence on the aid and protection of the United States, and on the sincerity and zeal with which I am myself animated in the furthering of this humane work. You are our brethren of the same land; we wish your prosperity as brethren should do. Farewell.
When a delegation from the Cherokee Nation's Upper Towns lobbied Jefferson for the full and equal citizenship promised to Indians living in American territory by George Washington, his response indicated that he was willing to grant citizenship to those Indian nations who sought it. In his eighth annual message to Congress on November 8, 1808, he presented a vision of white and Indian unity:
With our Indian neighbors the public peace has been steadily maintained ... And, generally, from a conviction that we consider them as part of ourselves, and cherish with sincerity their rights and interests, the attachment of the Indian tribes is gaining strength daily... and will amply requite us for the justice and friendship practiced towards them ... [O]ne of the two great divisions of the Cherokee nation have now under consideration to solicit the citizenship of the United States, and to be identified with us in laws and government, in such progressive manner as we shall think best.
As some of Jefferson's other writings illustrate, however, he was ambivalent about Indian assimilation and used the words "exterminate" and "extirpate" about tribes who resisted American expansion and were willing to fight for their lands. Jefferson intended to change Indian lifestyles from hunting and gathering to farming, largely through "the decrease of game rendering their subsistence by hunting insufficient". He expected the change to agriculture to make them dependent on white Americans for goods, and more likely to surrender their land or allow themselves to be moved west of the Mississippi River. In an 1803 letter to William Henry Harrison, Jefferson wrote:
Should any tribe be foolhardy enough to take up the hatchet at any time, the seizing the whole country of that tribe, and driving them across the Mississippi, as the only condition of peace, would be an example to others, and a furtherance of our final consolidation.
In that letter, Jefferson spoke about protecting the Indians from injustices perpetrated by settlers:
Our system is to live in perpetual peace with the Indians, to cultivate an affectionate attachment from them, by everything just and liberal which we can do for them within ... reason, and by giving them effectual protection against wrongs from our own people.
According to the treaty of February 27, 1819, the U.S. government would offer citizenship and 640 acres (260 ha) of land per family to Cherokees who lived east of the Mississippi. Native American land was sometimes purchased, by treaty or under duress. The idea of land exchange, that Native Americans would give up their land east of the Mississippi in exchange for a similar amount of territory west of the river, was first proposed by Jefferson in 1803 and first incorporated into treaties in 1817 (years after the Jefferson presidency). The Indian Removal Act of 1830 included this concept.
Under President James Monroe, Secretary of War John C. Calhoun devised the first plans for Indian removal. Monroe approved Calhoun's plans by late 1824 and, in a special message to the Senate on January 27, 1825, requested the creation of the Arkansaw and Indian Territories; the Indians east of the Mississippi would voluntarily exchange their lands for lands west of the river. The Senate accepted Monroe's request, and asked Calhoun to draft a bill which was killed in the House of Representatives by the Georgia delegation. President John Quincy Adams assumed the Calhoun–Monroe policy, and was determined to remove the Indians by non-forceful means; Georgia refused to consent to Adams' request, forcing the president to forge a treaty with the Cherokees granting Georgia the Cherokee lands. On July 26, 1827, the Cherokee Nation adopted a written constitution (modeled on that of the United States) which declared that they were an independent nation with jurisdiction over their own lands. Georgia contended that it would not countenance a sovereign state within its own territory, and asserted its authority over Cherokee territory. When Andrew Jackson became president as the candidate of the newly-organized Democratic Party, he agreed that the Indians should be forced to exchange their eastern lands for western lands (including relocation) and vigorously enforced Indian removal.
Although Indian removal was a popular policy, it was also opposed on legal and moral grounds; it also ran counter to the formal, customary diplomatic interaction between the federal government and the Native nations. Ralph Waldo Emerson wrote the widely-published letter "A Protest Against the Removal of the Cherokee Indians from the State of Georgia" in 1838, shortly before the Cherokee removal. Emerson criticizes the government and its removal policy, saying that the removal treaty was illegitimate; it was a "sham treaty", which the U.S. government should not uphold. He describes removal as
such a dereliction of all faith and virtues, such a denial of justice…in the dealing of a nation with its own allies and wards since the earth was made…a general expression of despondency, of disbelief, that any goodwill accrues from a remonstrance on an act of fraud and robbery, appeared in those men to whom we naturally turn for aid and counsel.
Emerson concludes his letter by saying that it should not be a political issue, urging President Martin Van Buren to prevent the enforcement of Cherokee removal. Other individual settlers and settler social organizations throughout the United States also opposed removal.
Native groups reshaped their governments, made constitutions and legal codes, and sent delegates to Washington to negotiate policies and treaties to uphold their autonomy and ensure federally-promised protection from the encroachment of states. They thought that acclimating, as the U.S. wanted them to, would stem removal policy and create a better relationship with the federal government and surrounding states.
Native American nations had differing views about removal. Although most wanted to remain on their native lands and do anything possible to ensure that, others believed that removal to a nonwhite area was their only option to maintain their autonomy and culture. The U.S. used this division to forge removal treaties with (often) minority groups who became convinced that removal was the best option for their people. These treaties were often not acknowledged by most of a nation's people. When Congress ratified the removal treaty, the federal government could use military force to remove Native nations if they had not moved (or had begun moving) by the date stipulated in the treaty.
When Andrew Jackson became president of the United States in 1829, his government took a hard line on Indian removal; Jackson abandoned his predecessors' policy of treating Indian tribes as separate nations, aggressively pursuing all Indians east of the Mississippi who claimed constitutional sovereignty and independence from state laws. They were to be removed to reservations in Indian Territory, west of the Mississippi (present-day Oklahoma), where they could exist without state interference. At Jackson's request, Congress began a debate on an Indian-removal bill. After fierce disagreement, the Senate passed the bill by a 28–19 vote; the House had narrowly passed it, 102–97. Jackson signed the Indian Removal Act into law on May 30, 1830.
That year, most of the Five Civilized Tribes—the Chickasaw, Choctaw, Creek, Seminole, and Cherokee—lived east of the Mississippi. The Indian Removal Act implemented federal-government policy towards its Indian populations, moving Native American tribes east of the Mississippi to lands west of the river. Although the act did not authorize the forced removal of indigenous tribes, it enabled the president to negotiate land-exchange treaties.
On September 27, 1830, the Choctaw signed the Treaty of Dancing Rabbit Creek and became the first Native American tribe to be removed. The agreement was one of the largest transfers of land between the U.S. government and Native Americans which was not the result of war. The Choctaw signed away their remaining traditional homelands, opening them up for European–American settlement in Mississippi Territory. When the tribe reached Little Rock, a chief called its trek a "trail of tears and death".
In 1831, French historian and political scientist Alexis de Tocqueville witnessed an exhausted group of Choctaw men, women and children emerging from the forest during an exceptionally cold winter near Memphis, Tennessee, on their way to the Mississippi to be loaded onto a steamboat. He wrote,
In the whole scene there was an air of ruin and destruction, something which betrayed a final and irrevocable adieu; one couldn't watch without feeling one's heart wrung. The Indians were tranquil but sombre and taciturn. There was one who could speak English and of whom I asked why the Chactas were leaving their country. "To be free," he answered, could never get any other reason out of him. We ... watch the expulsion ... of one of the most celebrated and ancient American peoples.
While the Indian Removal Act made the move of the tribes voluntary, it was often abused by government officials. The best-known example is the Treaty of New Echota, which was signed by a small faction of twenty Cherokee tribal members (not the tribal leadership) on December 29, 1835. Most of the Cherokee later blamed the faction and the treaty for the tribe's forced relocation in 1838. An estimated 4,000 Cherokee died in the march, which is known as the Trail of Tears. Missionary organizer Jeremiah Evarts urged the Cherokee Nation to take its case to the U.S. Supreme Court.
The Marshall court heard the case in Cherokee Nation v. Georgia (1831) but declined to rule on its merits, declaring that the Native American tribes were not sovereign nations and could not "maintain an action" in U.S. courts. In an opinion written by Chief Justice Marshall in Worcester v. Georgia (1832), the court held that individual states had no authority in American Indian affairs.
The state of Georgia defied the Supreme Court ruling, and the desire of settlers and land speculators for Indian lands continued unabated; some whites claimed that Indians threatened peace and security. The Georgia legislature passed a law forbidding settlers from living on Indian territory after March 31, 1831, without a license from the state; this excluded missionaries who opposed Indian removal.
The Seminole refused to leave their Florida lands in 1835, leading to the Second Seminole War. Osceola was a Seminole leader of the people's fight against removal. Based in the Everglades, Osceola and his band used surprise attacks to defeat the U.S. Army in a number of battles. In 1837, Osceola was duplicitously captured by order of U.S. General Thomas Jesup when Osceola came under a flag of truce to negotiate peace near Fort Peyton. Osceola died in prison of illness; the war resulted in over 1,500 U.S. deaths and cost the government $20 million. Some Seminole traveled deeper into the Everglades, and others moved west. The removal continued, and a number of wars broke out over land. Earlier, in 1823, the Seminole had signed the Treaty of Moultrie Creek, which reduced their lands from 34 million to 4 million acres.
In the aftermath of the Treaties of Fort Jackson and Washington, the Muscogee were confined to a small strip of land in present-day east central Alabama. The Creek national council signed the Treaty of Cusseta in 1832, ceding their remaining lands east of the Mississippi to the U.S. and accepting relocation to the Indian Territory. Most Muscogee were removed to the territory during the Trail of Tears in 1834, although some remained behind. Although the Creek War of 1836 ended government attempts to convince the Creek population to leave voluntarily, Creeks who had not participated in the war were not forced west (as others were). The Creek population was placed into camps and told that they would be relocated soon. Many Creek leaders were surprised by the quick departure but could do little to challenge it. The 16,000 Creeks were organized into five detachments that were to be sent to Fort Gibson. The Creek leaders did their best to negotiate better conditions, and succeeded in obtaining wagons and medicine. To prepare for the relocation, Creeks began to deconstruct their spiritual lives; they burned piles of lightwood over their ancestors' graves to honor their memories, and polished the sacred plates which would travel at the front of each group. They also prepared financially, selling what they could not bring. Many were swindled by local merchants out of valuable possessions (including land), and the military had to intervene. The detachments began moving west in September 1836, facing harsh conditions. Despite their preparations, the detachments faced bad roads, worse weather, and a lack of drinkable water. When all five detachments reached their destination, they recorded their death toll. The first detachment, with 2,318 Creeks, had 78 deaths; the second had 3,095 Creeks, with 37 deaths. The third had 2,818 Creeks, and 12 deaths; the fourth, 2,330 Creeks and 36 deaths. The fifth detachment, with 2,087 Creeks, had 25 deaths. In 1837, outside of Baton Rouge, Louisiana, over 300 Creeks being forcibly removed to Western prairies drowned in the Mississippi River.
Friends and Brothers – By permission of the Great Spirit above, and the voice of the people, I have been made President of the United States, and now speak to you as your Father and friend, and request you to listen. Your warriors have known me long. You know I love my white and red children, and always speak with a straight, and not with a forked tongue; that I have always told you the truth ... Where you now are, you and my white children are too near to each other to live in harmony and peace. Your game is destroyed, and many of your people will not work and till the earth. Beyond the great River Mississippi, where a part of your nation has gone, your Father has provided a country large enough for all of you, and he advises you to remove to it. There your white brothers will not trouble you; they will have no claim to the land, and you can live upon it you and all your children, as long as the grass grows or the water runs, in peace and plenty. It will be yours forever. For the improvements in the country where you now live, and for all the stock which you cannot take with you, your Father will pay you a fair price ...
Unlike other tribes, who exchanged lands, the Chickasaw were to receive financial compensation of $3 million from the United States for their lands east of the Mississippi River. They reached an agreement to purchase land from the previously-removed Choctaw in 1836 after a bitter five-year debate, paying the Choctaw $530,000 for the westernmost Choctaw land. Most of the Chickasaw moved in 1837 and 1838. The $3 million owed to the Chickasaw by the U.S. went unpaid for nearly 30 years.
The Five Civilized Tribes were resettled in the new Indian Territory. The Cherokee occupied the northeast corner of the territory and a 70-mile-wide (110 km) strip of land in Kansas on its border with the territory. Some indigenous nations resisted the forced migration more strongly. The few who stayed behind eventually formed tribal groups, including the Eastern Band of Cherokee (based in North Carolina), the Mississippi Band of Choctaw Indians, the Seminole Tribe of Florida, and the Creeks in Alabama (including the Poarch Band).
Tribes in the Old Northwest were smaller and more fragmented than the Five Civilized Tribes, so the treaty and emigration process was more piecemeal. Following the Northwest Indian War, most of the modern state of Ohio was taken from native nations in the 1795 Treaty of Greenville. Tribes such as the already-displaced Lenape (Delaware tribe), Kickapoo and Shawnee, were removed from Indiana, Michigan, and Ohio during the 1820s. The Potawatomi were forced out of Wisconsin and Michigan in late 1838, and were resettled in Kansas Territory. Communities remaining in present-day Ohio were forced to move to Louisiana, which was then controlled by Spain.
Bands of Shawnee, Ottawa, Potawatomi, Sauk, and Meskwaki (Fox) signed treaties and relocated to the Indian Territory. In 1832, the Sauk leader Black Hawk led a band of Sauk and Fox back to their lands in Illinois; the U.S. Army and Illinois militia defeated Black Hawk and his warriors in the Black Hawk War, and the Sauk and Fox were relocated to present-day Iowa. The Miami were split, with many of the tribe resettled west of the Mississippi River during the 1840s.
In the Second Treaty of Buffalo Creek (1838), the Senecas transferred all their land in New York (except for one small reservation) in exchange for 200,000 acres (810 km²) of land in Indian Territory. The federal government would be responsible for the removal of the Senecas who opted to go west, and the Ogden Land Company would acquire their New York lands. The lands were sold by government officials, however, and the proceeds were deposited in the U.S. Treasury. Maris Bryant Pierce, a "young chief", served as a lawyer representing four territories of the Seneca tribe, starting in 1838. The Senecas asserted that they had been defrauded, and sued for redress in the Court of Claims. The case was not resolved until 1898, when the United States awarded $1,998,714.46 (~$60.3 million in 2022) in compensation to "the New York Indians". The U.S. signed treaties with the Senecas and the Tonawanda Senecas in 1842 and 1857, respectively. Under the treaty of 1857, the Tonawandas renounced all claim to lands west of the Mississippi in exchange for the right to buy back the Tonawanda Reservation from the Ogden Land Company. Over a century later, the Senecas purchased a 9-acre (3.6 ha) plot (part of their original reservation) in downtown Buffalo to build the Seneca Buffalo Creek Casino.
Historical views of Indian removal have been reevaluated since that time. Widespread contemporary acceptance of the policy, due in part to the popular embrace of the concept of manifest destiny, has given way to a more somber perspective. Historians have often described the removal of Native Americans as paternalism, ethnic cleansing, or genocide. Historian David Stannard has called it genocide.
Andrew Jackson's Indian policy stirred much public controversy before its enactment, but virtually none among historians and biographers of the 19th and early 20th centuries. However, his recent reputation has been negatively affected by his treatment of the Indians. Historians who admire Jackson's strong presidential leadership, such as Arthur M. Schlesinger, Jr., would gloss over the Indian Removal in a footnote. In 1969, Francis Paul Prucha defended Jackson's Indian policy and wrote that Jackson's removal of the Five Civilized Tribes from the hostile political environment of the Old South to Oklahoma probably saved them. Jackson was sharply attacked by political scientist Michael Rogin and historian Howard Zinn during the 1970s, primarily on this issue; Zinn called him an "exterminator of Indians". According to historians Paul R. Bartrop and Steven L. Jacobs, however, Jackson's policies do not meet the criteria for physical or cultural genocide. Historian Sean Wilentz describes the view of Jacksonian "infantilization" and "genocide" of the Indians as a historical caricature which "turns tragedy into melodrama, exaggerates parts at the expense of the whole, and sacrifices nuance for sharpness".
{
"paragraph_id": 0,
"text": "Indian removal was the United States government policy of forced displacement of self-governing tribes of Native Americans from their ancestral homelands in the eastern United States to lands west of the Mississippi River – specifically, to a designated Indian Territory (roughly, present-day Oklahoma). The Indian Removal Act, the key law which authorized the removal of Native tribes, was signed by Andrew Jackson in 1830. Although Jackson took a hard line on Indian removal, the law was enforced primarily during the Martin Van Buren administration. After the passage of the Indian Removal Act in 1830, approximately 60,000 members of the Cherokee, Muscogee (Creek), Seminole, Chickasaw, and Choctaw nations (including thousands of their black slaves) were forcibly removed from their ancestral homelands, with thousands dying during the Trail of Tears.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Indian removal, a popular policy among incoming settlers, was a consequence of actions by European settlers in North America during the colonial period and then by the United States government (and its citizens) until the mid-20th century. The policy traced its origins to the administration of James Monroe, although it addressed conflicts between European and Native Americans which had occurred since the 17th century and were escalating into the early 19th century (as European settlers pushed westward in the cultural belief of manifest destiny). Historical views of Indian removal have been reevaluated since that time. Widespread contemporary acceptance of the policy, due in part to the popular embrace of the concept of manifest destiny, has given way to a more somber perspective. Historians have often described the removal of Native Americans as paternalism, ethnic cleansing, or genocide.",
"title": ""
},
{
"paragraph_id": 2,
"text": "American leaders in the Revolutionary and early US eras debated about whether Native Americans should be treated as individuals or as nations.",
"title": "Revolutionary background"
},
{
"paragraph_id": 3,
"text": "In the indictment section of the Declaration of Independence, the Indigenous inhabitants of the United States are referred to as \"merciless Indian Savages\", reflecting a commonly held view at the time by the colonists in the United States.",
"title": "Revolutionary background"
},
{
"paragraph_id": 4,
"text": "In a draft \"Proposed Articles of Confederation\" presented to the Continental Congress on May 10, 1775, Benjamin Franklin called for a \"perpetual Alliance\" with the Indians in the nation about to be born, particularly with the six nations of the Iroquois Confederacy:",
"title": "Revolutionary background"
},
{
"paragraph_id": 5,
"text": "Article XI. A perpetual alliance offensive and defensive is to be entered into as soon as may be with the Six Nations; their Limits to be ascertained and secured to them; their Land not to be encroached on, nor any private or Colony Purchases made of them hereafter to be held good, nor any Contract for Lands to be made but between the Great Council of the Indians at Onondaga and the General Congress. The Boundaries and Lands of all the other Indians shall also be ascertained and secured to them in the same manner; and Persons appointed to reside among them in proper Districts, who shall take care to prevent Injustice in the Trade with them, and be enabled at our general Expense by occasional small Supplies, to relieve their personal Wants and Distresses. And all Purchases from them shall be by the Congress for the General Advantage and Benefit of the United Colonies.",
"title": "Revolutionary background"
},
{
"paragraph_id": 6,
"text": "The Confederation Congress passed the Northwest Ordinance of 1787 (a precedent for U.S. territorial expansion would occur for years to come), calling for the protection of Native American \"property, rights, and liberty\"; the U.S. Constitution of 1787 (Article I, Section 8) made Congress responsible for regulating commerce with the Indian tribes. In 1790, the new U.S. Congress passed the Indian Nonintercourse Act (renewed and amended in 1793, 1796, 1799, 1802, and 1834) to protect and codify the land rights of recognized tribes.",
"title": "Revolutionary background"
},
{
"paragraph_id": 7,
"text": "President George Washington, in his 1790 address to the Seneca Nation which called the pre-Constitutional Indian land-sale difficulties \"evils\", said that the case was now altered and pledged to uphold Native American \"just rights\". In March and April 1792, Washington met with 50 tribal chiefs in Philadelphia—including the Iroquois—to discuss strengthening the friendship between them and the United States. Later that year, in his fourth annual message to Congress, Washington stressed the need to build peace, trust, and commerce with Native Americans:",
"title": "Revolutionary background"
},
{
"paragraph_id": 8,
"text": "I cannot dismiss the subject of Indian affairs without again recommending to your consideration the expediency of more adequate provision for giving energy to the laws throughout our interior frontier, and for restraining the commission of outrages upon the Indians; without which all pacific plans must prove nugatory. To enable, by competent rewards, the employment of qualified and trusty persons to reside among them, as agents, would also contribute to the preservation of peace and good neighbourhood. If, in addition to these expedients, an eligible plan could be devised for promoting civilization among the friendly tribes, and for carrying on trade with them, upon a scale equal to their wants, and under regulations calculated to protect them from imposition and extortion, its influence in cementing their interests with our's [sic] could not but be considerable.",
"title": "Revolutionary background"
},
{
"paragraph_id": 9,
"text": "In his seventh annual message to Congress in 1795, Washington intimated that if the U.S. government wanted peace with the Indians it must behave peacefully; if the U.S. wanted raids by Indians to stop, raids by American \"frontier inhabitants\" must also stop.",
"title": "Revolutionary background"
},
{
"paragraph_id": 10,
"text": "In his Notes on the State of Virginia (1785), Thomas Jefferson defended Native American culture and marvelled at how the tribes of Virginia \"never submitted themselves to any laws, any coercive power, any shadow of government\" due to their \"moral sense of right and wrong\". He wrote to the Marquis de Chastellux later that year, \"I believe the Indian then to be in body and mind equal to the whiteman\". Jefferson's desire, as interpreted by Francis Paul Prucha, was for Native Americans to intermix with European Americans and become one people. To achieve that end as president, Jefferson offered U.S. citizenship to some Indian nations and proposed offering them credit to facilitate trade.",
"title": "Revolutionary background"
},
{
"paragraph_id": 11,
"text": "On 27 February 1803, Jefferson wrote in a letter to William Henry Harrison:",
"title": "Revolutionary background"
},
{
"paragraph_id": 12,
"text": "In this way our settlements will gradually circumbscribe & approach the Indians, & they will in time either incorporate with us as citizens of the US. or remove beyond the Missisipi. The former is certainly the termination of their history most happy for themselves. But in the whole course of this, it is essential to cultivate their love. As to their fear, we presume that our strength & their weakness is now so visible that they must see we have only to shut our hand to crush them, & that all our liberalities to them proceed from motives of pure humanity only.",
"title": "Revolutionary background"
},
{
"paragraph_id": 13,
"text": "As president, Thomas Jefferson developed a far-reaching Indian policy with two primary goals. He wanted to assure that the Native nations (not foreign nations) were tightly bound to the new United States, as he considered the security of the nation to be paramount. He also wanted to \"civilize\" them into adopting an agricultural, rather than a hunter-gatherer, lifestyle. These goals would be achieved through treaties and the development of trade.",
"title": "Jeffersonian policy"
},
{
"paragraph_id": 14,
"text": "Jefferson initially promoted an American policy which encouraged Native Americans to become assimilated, or \"civilized\". He made sustained efforts to win the friendship and cooperation of many Native American tribes as president, repeatedly articulating his desire for a united nation of whites and Indians as in his November 3, 1802, letter to Seneca spiritual leader Handsome Lake:",
"title": "Jeffersonian policy"
},
{
"paragraph_id": 15,
"text": "Go on then, brother, in the great reformation you have undertaken ... In all your enterprises for the good of your people, you may count with confidence on the aid and protection of the United States, and on the sincerity and zeal with which I am myself animated in the furthering of this humane work. You are our brethren of the same land; we wish your prosperity as brethren should do. Farewell.",
"title": "Jeffersonian policy"
},
{
"paragraph_id": 16,
"text": "When a delegation from the Cherokee Nation's Upper Towns lobbied Jefferson for the full and equal citizenship promised to Indians living in American territory by George Washington, his response indicated that he was willing to grant citizenship to those Indian nations who sought it. In his eighth annual message to Congress on November 8, 1808, he presented a vision of white and Indian unity:",
"title": "Jeffersonian policy"
},
{
"paragraph_id": 17,
"text": "With our Indian neighbors the public peace has been steadily maintained ... And, generally, from a conviction that we consider them as part of ourselves, and cherish with sincerity their rights and interests, the attachment of the Indian tribes is gaining strength daily... and will amply requite us for the justice and friendship practiced towards them ... [O]ne of the two great divisions of the Cherokee nation have now under consideration to solicit the citizenship of the United States, and to be identified with us in-laws and government, in such progressive manner as we shall think best.",
"title": "Jeffersonian policy"
},
{
"paragraph_id": 18,
"text": "As some of Jefferson's other writings illustrate, however, he was ambivalent about Indian assimilation and used the words \"exterminate\" and \"extirpate\" about tribes who resisted American expansion and were willing to fight for their lands. Jefferson intended to change Indian lifestyles from hunting and gathering to farming, largely through \"the decrease of game rendering their subsistence by hunting insufficient\". He expected the change to agriculture to make them dependent on white Americans for goods, and more likely to surrender their land or allow themselves to be moved west of the Mississippi River. In an 1803 letter to William Henry Harrison, Jefferson wrote:",
"title": "Jeffersonian policy"
},
{
"paragraph_id": 19,
"text": "Should any tribe be foolhardy enough to take up the hatchet at any time, the seizing the whole country of that tribe, and driving them across the Mississippi, as the only condition of peace, would be an example to others, and a furtherance of our final consolidation.",
"title": "Jeffersonian policy"
},
{
"paragraph_id": 20,
"text": "In that letter, Jefferson spoke about protecting the Indians from injustices perpetrated by settlers:",
"title": "Jeffersonian policy"
},
{
"paragraph_id": 21,
"text": "Our system is to live in perpetual peace with the Indians, to cultivate an affectionate attachment from them, by everything just and liberal which we can do for them within ... reason, and by giving them effectual protection against wrongs from our own people.",
"title": "Jeffersonian policy"
},
{
"paragraph_id": 22,
"text": "According to the treaty of February 27, 1819, the U.S. government would offer citizenship and 640 acres (260 ha) of land per family to Cherokees who lived east of the Mississippi. Native American land was sometimes purchased, by treaty or under duress. The idea of land exchange, that Native Americans would give up their land east of the Mississippi in exchange for a similar amount of territory west of the river, was first proposed by Jefferson in 1803 and first incorporated into treaties in 1817 (years after the Jefferson presidency). The Indian Removal Act of 1830 included this concept.",
"title": "Jeffersonian policy"
},
{
"paragraph_id": 23,
"text": "Under President James Monroe, Secretary of War John C. Calhoun devised the first plans for Indian removal. Monroe approved Calhoun's plans by late 1824 and, in a special message to the Senate on January 27, 1825, requested the creation of the Arkansaw and Indian Territories; the Indians east of the Mississippi would voluntarily exchange their lands for lands west of the river. The Senate accepted Monroe's request, and asked Calhoun to draft a bill which was killed in the House of Representatives by the Georgia delegation. President John Quincy Adams assumed the Calhoun–Monroe policy, and was determined to remove the Indians by non-forceful means; Georgia refused to consent to Adams' request, forcing the president to forge a treaty with the Cherokees granting Georgia the Cherokee lands. On July 26, 1827, the Cherokee Nation adopted a written constitution (modeled on that of the United States) which declared that they were an independent nation with jurisdiction over their own lands. Georgia contended that it would not countenance a sovereign state within its own territory, and asserted its authority over Cherokee territory. When Andrew Jackson became president as the candidate of the newly-organized Democratic Party, he agreed that the Indians should be forced to exchange their eastern lands for western lands (including relocation) and vigorously enforced Indian removal.",
"title": "John C. Calhoun's plan"
},
{
"paragraph_id": 24,
"text": "Although Indian removal was a popular policy, it was also opposed on legal and moral grounds; it also ran counter to the formal, customary diplomatic interaction between the federal government and the Native nations. Ralph Waldo Emerson wrote the widely-published letter \"A Protest Against the Removal of the Cherokee Indians from the State of Georgia\" in 1838, shortly before the Cherokee removal. Emerson criticizes the government and its removal policy, saying that the removal treaty was illegitimate; it was a \"sham treaty\", which the U.S. government should not uphold. He describes removal as",
"title": "Opposition to removal from U.S. citizens"
},
{
"paragraph_id": 25,
"text": "such a dereliction of all faith and virtues, such a denial of justice…in the dealing of a nation with its own allies and wards since the earth was made…a general expression of despondency, of disbelief, that any goodwill accrues from a remonstrance on an act of fraud and robbery, appeared in those men to whom we naturally turn for aid and counsel.",
"title": "Opposition to removal from U.S. citizens"
},
{
"paragraph_id": 26,
"text": "Emerson concludes his letter by saying that it should not be a political issue, urging President Martin Van Buren to prevent the enforcement of Cherokee removal. Other individual settlers and settler social organizations throughout the United States also opposed removal.",
"title": "Opposition to removal from U.S. citizens"
},
{
"paragraph_id": 27,
"text": "Native groups reshaped their governments, made constitutions and legal codes, and sent delegates to Washington to negotiate policies and treaties to uphold their autonomy and ensure federally-promised protection from the encroachment of states. They thought that acclimating, as the U.S. wanted them to, would stem removal policy and create a better relationship with the federal government and surrounding states.",
"title": "Native American response to removal"
},
{
"paragraph_id": 28,
"text": "Native American nations had differing views about removal. Although most wanted to remain on their native lands and do anything possible to ensure that, others believed that removal to a nonwhite area was their only option to maintain their autonomy and culture. The U.S. used this division to forge removal treaties with (often) minority groups who became convinced that removal was the best option for their people. These treaties were often not acknowledged by most of a nation's people. When Congress ratified the removal treaty, the federal government could use military force to remove Native nations if they had not moved (or had begun moving) by the date stipulated in the treaty.",
"title": "Native American response to removal"
},
{
"paragraph_id": 29,
"text": "When Andrew Jackson became president of the United States in 1829, his government took a hard line on Indian removal; Jackson abandoned his predecessors' policy of treating Indian tribes as separate nations, aggressively pursuing all Indians east of the Mississippi who claimed constitutional sovereignty and independence from state laws. They were to be removed to reservations in Indian Territory, west of the Mississippi (present-day Oklahoma), where they could exist without state interference. At Jackson's request, Congress began a debate on an Indian-removal bill. After fierce disagreement, the Senate passed the bill by a 28–19 vote; the House had narrowly passed it, 102–97. Jackson signed the Indian Removal Act into law on May 30, 1830.",
"title": "Indian Removal Act"
},
{
"paragraph_id": 30,
"text": "That year, most of the Five Civilized Tribes—the Chickasaw, Choctaw, Creek, Seminole, and Cherokee—lived east of the Mississippi. The Indian Removal Act implemented federal-government policy towards its Indian populations, moving Native American tribes east of the Mississippi to lands west of the river. Although the act did not authorize the forced removal of indigenous tribes, it enabled the president to negotiate land-exchange treaties.",
"title": "Indian Removal Act"
},
{
"paragraph_id": 31,
"text": "On September 27, 1830, the Choctaw signed the Treaty of Dancing Rabbit Creek and became the first Native American tribe to be removed. The agreement was one of the largest transfers of land between the U.S. government and Native Americans which was not the result of war. The Choctaw signed away their remaining traditional homelands, opening them up for European–American settlement in Mississippi Territory. When the tribe reached Little Rock, a chief called its trek a \"trail of tears and death\".",
"title": "Indian Removal Act"
},
{
"paragraph_id": 32,
"text": "In 1831, French historian and political scientist Alexis de Tocqueville witnessed an exhausted group of Choctaw men, women and children emerging from the forest during an exceptionally cold winter near Memphis, Tennessee, on their way to the Mississippi to be loaded onto a steamboat. He wrote,",
"title": "Indian Removal Act"
},
{
"paragraph_id": 33,
"text": "In the whole scene there was an air of ruin and destruction, something which betrayed a final and irrevocable adieu; one couldn't watch without feeling one's heart wrung. The Indians were tranquil but sombre and taciturn. There was one who could speak English and of whom I asked why the Chactas were leaving their country. \"To be free,\" he answered, could never get any other reason out of him. We ... watch the expulsion ... of one of the most celebrated and ancient American peoples.",
"title": "Indian Removal Act"
},
{
"paragraph_id": 34,
"text": "While the Indian Removal Act made the move of the tribes voluntary, it was often abused by government officials. The best-known example is the Treaty of New Echota, which was signed by a small faction of twenty Cherokee tribal members (not the tribal leadership) on December 29, 1835. Most of the Cherokee later blamed the faction and the treaty for the tribe's forced relocation in 1838. An estimated 4,000 Cherokee died in the march, which is known as the Trail of Tears. Missionary organizer Jeremiah Evarts urged the Cherokee Nation to take its case to the U.S. Supreme Court.",
"title": "Indian Removal Act"
},
{
"paragraph_id": 35,
"text": "The Marshall court heard the case in Cherokee Nation v. Georgia (1831), but declined to rule on its merits; the court declaring that the Native American tribes were not sovereign nations, and could not \"maintain an action\" in U.S. courts. In an opinion written by Chief Justice Marshall in Worcester v. Georgia (1832), individual states had no authority in American Indian affairs.",
"title": "Indian Removal Act"
},
{
"paragraph_id": 36,
"text": "The state of Georgia defied the Supreme Court ruling, and the desire of settlers and land speculators for Indian lands continued unabated; some whites claimed that Indians threatened peace and security. The Georgia legislature passed a law forbidding settlers from living on Indian territory after March 31, 1831, without a license from the state; this excluded missionaries who opposed Indian removal.",
"title": "Indian Removal Act"
},
{
"paragraph_id": 37,
"text": "The Seminole refused to leave their Florida lands in 1835, leading to the Second Seminole War. Osceola was a Seminole leader of the people's fight against removal. Based in the Everglades, Osceola and his band used surprise attacks to defeat the U.S. Army in a number of battles. In 1837, Osceola was duplicitously captured by order of U.S. General Thomas Jesup when Osceola came under a flag of truce to negotiate peace near Fort Peyton. Osceola died in prison of illness; the war resulted in over 1,500 U.S. deaths, and cost the government $20 million. Some Seminole traveled deeper into the Everglades, and others moved west. The removal continued, and a number of wars broke out over land.In 1823, the Seminole signed the Treaty of Moultrie Creek, which reduced their 34 million to 4 millions acres.",
"title": "Indian Removal Act"
},
{
"paragraph_id": 38,
"text": "In the aftermath of the Treaties of Fort Jackson, and the Washington, the Muscogee were confined to a small strip of land in present-day east central Alabama. The Creek national council signed the Treaty of Cusseta in 1832, ceding their remaining lands east of the Mississippi to the U.S. and accepting relocation to the Indian Territory. Most Muscogee were removed to the territory during the Trail of Tears in 1834, although some remained behind. Although the Creek War of 1836 ended government attempts to convince the Creek population to leave voluntarily, Creeks who had not participated in the war were not forced west (as others were). The Creek population was placed into camps and told that they would be relocated soon. Many Creek leaders were surprised by the quick departure but could do little to challenge it. The 16,000 Creeks were organized into five detachments who were to be sent to Fort Gibson. The Creek leaders did their best to negotiate better conditions, and succeeded in obtaining wagons and medicine. To prepare for the relocation, Creeks began to deconstruct their spiritual lives; they burned piles of lightwood over their ancestors' graves to honor their memories, and polished the sacred plates which would travel at the front of each group. They also prepared financially, selling what they could not bring. Many were swindled by local merchants out of valuable possessions (including land), and the military had to intervene. The detachments began moving west in September 1836, facing harsh conditions. Despite their preparations, the detachments faced bad roads, worse weather, and a lack of drinkable water. When all five detachments reached their destination, they recorded their death toll. The first detachment, with 2,318 Creeks, had 78 deaths; the second had 3,095 Creeks, with 37 deaths. The third had 2,818 Creeks, and 12 deaths; the fourth, 2,330 Creeks and 36 deaths. The fifth detachment, with 2,087 Creeks, had 25 deaths. In 1837 outside of Baton Rouge, Louisiana over 300 Creeks being forcibly removed to Western prairies drowned in the Mississippi River.",
"title": "Indian Removal Act"
},
{
"paragraph_id": 39,
"text": "Friends and Brothers – By permission of the Great Spirit above, and the voice of the people, I have been made President of the United States, and now speak to you as your Father and friend,and request you to listen. Your warriors have known me long. You know I love my white and red children, and always speak with a straight, and not with a forked tongue; that I have always told you the truth ... Where you now are, you and my white children are too near to each other to live in harmony and peace. Your game is destroyed, and many of your people will not work and till the earth. Beyond the great River Mississippi, where a part of your nation has gone, your Father has provided a country large enough for all of you, and he advises you to remove to it. There your white brothers will not trouble you; they will have no claim to the land, and you can live upon it you and all your children, as long as the grass grows or the water runs, in peace and plenty. It will be yours forever. For the improvements in the country where you now live, and for all the stock which you cannot take with you, your Father will pay you a fair price ...",
"title": "Indian Removal Act"
},
{
"paragraph_id": 40,
"text": "Unlike other tribes, who exchanged lands, the Chickasaw were to receive financial compensation of $3 million from the United States for their lands east of the Mississippi River. They reached an agreement to purchase of land from the previously-removed Choctaw in 1836 after a bitter five-year debate, paying the Chocktaw $530,000 for the westernmost Choctaw land. Most of the Chickasaw moved in 1837 and 1838. The $3 million owed to the Chickasaw by the U.S. went unpaid for nearly 30 years.",
"title": "Indian Removal Act"
},
{
"paragraph_id": 41,
"text": "The Five Civilized Tribes were resettled in the new Indian Territory. The Cherokee occupied the northeast corner of the territory and a 70-mile-wide (110 km) strip of land in Kansas on its border with the territory. Some indigenous nations resisted the forced migration more strongly. The few who stayed behind eventually formed tribal groups, including the Eastern Band of Cherokee (based in North Carolina), the Mississippi Band of Choctaw Indians, the Seminole Tribe of Florida, and the Creeks in Alabama (including the Poarch Band).",
"title": "Indian Removal Act"
},
{
"paragraph_id": 42,
"text": "Tribes in the Old Northwest were smaller and more fragmented than the Five Civilized Tribes, so the treaty and emigration process was more piecemeal. Following the Northwest Indian War, most of the modern state of Ohio was taken from native nations in the 1795 Treaty of Greenville. Tribes such as the already-displaced Lenape (Delaware tribe), Kickapoo and Shawnee, were removed from Indiana, Michigan, and Ohio during the 1820s. The Potawatomi were forced out of Wisconsin and Michigan in late 1838, and were resettled in Kansas Territory. Communities remaining in present-day Ohio were forced to move to Louisiana, which was then controlled by Spain.",
"title": "Removals"
},
{
"paragraph_id": 43,
"text": "Bands of Shawnee, Ottawa, Potawatomi, Sauk, and Meskwaki (Fox) signed treaties and relocated to the Indian Territory. In 1832, the Sauk leader Black Hawk led a band of Sauk and Fox back to their lands in Illinois; the U.S. Army and Illinois militia defeated Black Hawk and his warriors in the Black Hawk War, and the Sauk and Fox were relocated to present-day Iowa. The Miami were split, with many of the tribe resettled west of the Mississippi River during the 1840s.",
"title": "Removals"
},
{
"paragraph_id": 44,
"text": "In the Second Treaty of Buffalo Creek (1838), the Senecas transferred all their land in New York (except for one small reservation) in exchange for 200,000 acres (810 km) of land in Indian Territory. The federal government would be responsible for the removal of the Senecas who opted to go west, and the Ogden Land Company would acquire their New York lands. The lands were sold by government officials, however, and the proceeds were deposited in the U.S. Treasury. Maris Bryant Pierce, a \"young chief\" served as a lawyer representing four territories of the Seneca tribe, starting in 1838. The Senecas asserted that they had been defrauded, and sued for redress in the Court of Claims. The case was not resolved until 1898, when the United States awarded $1,998,714.46 (~$60.3 million in 2022) in compensation to \"the New York Indians\". The U.S. signed treaties with the Senecas and the Tonawanda Senecas in 1842 and 1857, respectively. Under the treaty of 1857, the Tonawandas renounced all claim to lands west of the Mississippi in exchange for the right to buy back the Tonawanda Reservation from the Ogden Land Company. Over a century later, the Senecas purchased a 9-acre (3.6 ha) plot (part of their original reservation) in downtown Buffalo to build the Seneca Buffalo Creek Casino.",
"title": "Removals"
},
{
"paragraph_id": 45,
"text": "Historical views of Indian removal have been reevaluated since that time. Widespread contemporary acceptance of the policy, due in part to the popular embrace of the concept of manifest destiny, has given way to a more somber perspective. Historians have often described the removal of Native Americans as paternalism, ethnic cleansing, or genocide. Historian David Stannard has called it genocide.",
"title": "Changed perspective"
},
{
"paragraph_id": 46,
"text": "Andrew Jackson's Indian policy stirred a lot of public controversy before his enactment, but virtually none among historians and biographers of the 19th and early 20th century. However, his recent reputation has been negatively affected by his treatment of the Indians. Historians who admire Jackson's strong presidential leadership, such as Arthur M. Schlesinger, Jr., would gloss over the Indian Removal in a footnote. In 1969, Francis Paul Prucha defended Jackson's Indian policy and wrote that Jackson's removal of the Five Civilized Tribes from the hostile political environment of the Old South to Oklahoma probably saved them. Jackson was sharply attacked by political scientist Michael Rogin and historian Howard Zinn during the 1970s, primarily on this issue; Zinn called him an \"exterminator of Indians\". According to historians Paul R. Bartrop and Steven L. Jacobs, however, Jackson's policies do not meet the criteria for physical or cultural genocide. Historian Sean Wilentz describes the view of Jacksonian \"infantilization\" and \"genocide\" of the Indians, as a historical caricature, which \"turns tragedy into melodrama, exaggerates parts at the expense of the whole, and sacrifices nuance for sharpness\".",
"title": "Changed perspective"
}
]
| Indian removal was the United States government policy of forced displacement of self-governing tribes of Native Americans from their ancestral homelands in the eastern United States to lands west of the Mississippi River – specifically, to a designated Indian Territory. The Indian Removal Act, the key law which authorized the removal of Native tribes, was signed by Andrew Jackson in 1830. Although Jackson took a hard line on Indian removal, the law was enforced primarily during the Martin Van Buren administration. After the passage of the Indian Removal Act in 1830, approximately 60,000 members of the Cherokee, Muscogee (Creek), Seminole, Chickasaw, and Choctaw nations were forcibly removed from their ancestral homelands, with thousands dying during the Trail of Tears. Indian removal, a popular policy among incoming settlers, was a consequence of actions by European settlers in North America during the colonial period and then by the United States government until the mid-20th century. The policy traced its origins to the administration of James Monroe, although it addressed conflicts between European and Native Americans which had occurred since the 17th century and were escalating into the early 19th century. Historical views of Indian removal have been reevaluated since that time. Widespread contemporary acceptance of the policy, due in part to the popular embrace of the concept of manifest destiny, has given way to a more somber perspective. Historians have often described the removal of Native Americans as paternalism, ethnic cleansing, or genocide. | 2001-09-28T15:41:19Z | 2023-12-13T20:26:40Z | [
"Template:Cite journal",
"Template:Inflation/year",
"Template:Snd",
"Template:Blockquote",
"Template:ISBN",
"Template:Genocide topics",
"Template:Indigenous rights footer",
"Template:Native American topics sidebar",
"Template:Cite web",
"Template:Pn",
"Template:Infobox civilian attack",
"Template:Genocide of Indigenous peoples",
"Template:Main",
"Template:Reflist",
"Template:Indian Removal",
"Template:Authority control",
"Template:Short description",
"Template:Citation needed",
"Template:Cite book",
"Template:Use mdy dates",
"Template:US history",
"Template:Native American topics",
"Template:Convert",
"Template:See also",
"Template:Format price",
"Template:Andrew Jackson",
"Template:Anchor"
]
| https://en.wikipedia.org/wiki/Indian_removal |
15,081 | Green Party (Ireland) | The Green Party (Irish: Comhaontas Glas, lit. 'Green Alliance') is a green political party that operates in the Republic of Ireland and Northern Ireland. Like other Green parties, it has eco-socialist/green-left and more moderate factions. It holds a pro-European stance. It was founded as the Ecology Party of Ireland in 1981 by Dublin teacher Christopher Fettes. The party became the Green Alliance in 1983 and adopted its current English-language name in 1987, while the Irish name was kept unchanged. The party leader is Eamon Ryan, the deputy leader is Catherine Martin, and the cathaoirleach (chairperson) is Pauline O'Reilly. Green Party candidates have been elected to most levels of representation: local government (in both the Republic and Northern Ireland), Dáil Éireann, the Northern Ireland Assembly, and the European Parliament.
The Green Party first entered the Dáil in 1989. It has participated in the Irish government twice, from 2007 to 2011 as junior partner in a coalition with Fianna Fáil, and since June 2020 in a coalition with Fianna Fáil and Fine Gael. Following the first period in government, the party suffered a wipeout in the February 2011 election, losing all six of its TDs. In the February 2016 election, it returned to the Dáil with two seats. Following this, Grace O'Sullivan was elected to the Seanad on 26 April 2016 and Joe O'Brien was elected to Dáil Éireann in the 2019 Dublin Fingal by-election. In the 2020 general election, the party had its best result ever, securing 12 TDs and becoming the fourth largest party in Ireland.
The Green Party began life as the Ecology Party in 1981, with Christopher Fettes serving as the party's first chairperson. The party's first public appearance was modest: the event announced that the party would be contesting the November 1982 general election, and was attended by its 7 election candidates, 20 party supporters, and a single journalist. Fettes opened the meeting by noting that the party did not expect to win any seats. Willy Clingan, the journalist present, recalled that "The Ecology Party introduced its seven election candidates at the nicest and most endearingly honest press conference of the whole campaign". The Ecology Party took 0.2% of the vote that year.
Following a name change to the Green Alliance, it contested the 1984 European elections, with party founder Roger Garland winning 1.9% in the Dublin constituency. The following year, it won its first election when Marcus Counihan was elected to Killarney Urban District Council at the 1985 local elections, buoyed by winning 5,200 first preference votes as a European candidate in Dublin the previous year. The party nationally ran 34 candidates and won 0.6% of the vote.
The party continued to struggle until the 1989 general election when the Green Party (as it was now named) won its first seat in Dáil Éireann, when Roger Garland was elected in Dublin South. Garland lost his seat at the 1992 general election, while Trevor Sargent gained a seat in Dublin North. In the 1994 European election, Patricia McKenna topped the poll in the Dublin constituency and Nuala Ahern won a seat in Leinster. They retained their European Parliament seats in the 1999 European election, although the party lost five councillors in local elections held that year despite an increase in its vote. At the 1997 general election, the party gained a seat when John Gormley won a Dáil seat in Dublin South-East.
At the 2002 general election the party made a breakthrough, getting six Teachtaí Dála (TDs) elected to the Dáil with 4% of the national vote. However, in the 2004 European election, the party lost both of its European Parliament seats. In the 2004 local elections, it increased its number of councillors at county level from 8 to 18 (out of 883) and at town council level from 5 to 14 (out of 744).
The party gained its first representation in the Northern Ireland Assembly in 2007, the Green Party in Northern Ireland having become a regional branch of the party the previous year.
The Green Party entered government for the first time after the 2007 general election, held on 24 May. Although its share of first-preference votes increased at the election, the party failed to increase the number of TDs returned. Mary White won a seat for the first time in Carlow–Kilkenny; however, Dan Boyle lost his seat in Cork South-Central. The party had approached the 2007 general election on an independent platform, not ruling out any coalition partners while expressing its preference for an alternative to the outgoing coalition of Fianna Fáil and the Progressive Democrats. Neither the outgoing government nor an alternative of Fine Gael, Labour and the Green Party had sufficient seats to form a majority. Fine Gael ruled out a coalition arrangement with Sinn Féin, opening the way for Green Party negotiations with Fianna Fáil.
Before the negotiations began, Ciarán Cuffe TD wrote on his blog that "a deal with Fianna Fáil would be a deal with the devil… and [the Green Party would be] decimated as a Party". After protracted negotiations, a draft programme for government was agreed to between the Greens and Fianna Fáil. On 13 June 2007, Green members at the Mansion House in Dublin voted 86% in favour (441 to 67; with 2 spoilt votes) of entering coalition with Fianna Fáil. The following day, the six Green Party TDs voted for the re-election of Bertie Ahern as Taoiseach. New party leader John Gormley was appointed as Minister for the Environment, Heritage and Local Government and Eamon Ryan was appointed as Minister for Communications, Energy and Natural Resources. Trevor Sargent was appointed as Minister of State at the Department of Agriculture, Fisheries and Food with responsibility for Food and Horticulture.
Before its entry into government, the Green Party had been a vocal supporter of the Shell to Sea movement, the campaign to reroute the M3 motorway away from Tara and (to a lesser extent) the campaign to end United States military use of Shannon Airport. After the party entered government there were no substantive changes in government policy on these issues, which meant that Eamon Ryan oversaw the Corrib gas project while he was in office. The Green Party had, at its last annual conference, made an inquiry into the irregularities surrounding the project (see Corrib gas controversy) a precondition of entering government but changed its stance during post-election negotiations with Fianna Fáil.
The 2008 budget did not include a carbon levy on fuels such as petrol, diesel and home heating oil, which the Green Party had sought before the election. A carbon levy was, however, introduced in the 2010 Budget. The 2008 budget did include a separate carbon budget announced by Gormley, which introduced a new energy-efficiency tax credit, a ban on incandescent bulbs from January 2009, a tax scheme incentivising commuters' purchases of bicycles and a new scale of vehicle registration tax based on carbon emissions.
At a special convention on whether to support the Treaty of Lisbon on 19 January 2008, the party voted 63.5% in favour of supporting the Treaty; this fell short of the party's two-thirds majority requirement for policy issues. As a result, the Green Party did not have an official campaign in the first Lisbon Treaty referendum, although individual members were involved on different sides. The referendum did not pass in 2008, and following the Irish government's negotiation with EU member states of additional legal guarantees and assurances, the Green Party held another special convention meeting in Dublin on 18 July 2009 to decide its position on the second Lisbon referendum. Precisely two-thirds of party members present voted to campaign for a 'Yes' in the referendum. This was the first time in the party's history that it had campaigned in favour of a European treaty.
The government's response to the post-2008 banking crisis significantly affected the party's support, and it suffered at the 2009 local elections, returning with only three County Council seats in total and losing its entire traditional Dublin base, with the exception of a Town Council seat in Balbriggan.
Déirdre de Búrca, one of two Green Senators nominated by Taoiseach Bertie Ahern in 2007, resigned from the party and her seat in 2010, in part owing to the party's inability to secure her a job in the European Commission. On 23 February 2010, Trevor Sargent resigned as Minister of State for Food and Horticulture owing to allegations over contacting Gardaí about a criminal case involving a constituent, with Ciarán Cuffe being appointed as his replacement the following March.
The Green Party supported the passage of legislation for EC–ECB–IMF financial support for Ireland's bank bailout. On 19 January 2011, the party derailed Taoiseach Brian Cowen's plans to reshuffle his cabinet when it refused to endorse Cowen's intended replacement ministers, forcing Cowen to redistribute the vacant portfolios among incumbent ministers. The Greens were angered at not having been consulted about this effort, and went as far as to threaten to pull out of the coalition unless Cowen set a firm date for an election due that spring. He ultimately set the date for 11 March.
On 23 January 2011, the Green Party met with Cowen following his resignation as leader of senior coalition partner Fianna Fáil the previous afternoon. The Green Party then announced it was breaking off the coalition and going into opposition with immediate effect. Ministers Gormley and Ryan resigned as cabinet ministers, and Cuffe and White resigned as Ministers of State. Green Party leader John Gormley said at a press conference announcing the withdrawal:
For a very long time we in the Green Party have stood back in the hope that Fianna Fáil could resolve persistent doubts about their party leadership. A definitive resolution of this has not yet been possible. And our patience has reached an end.
In almost four years in Government, from 2007 to 2011, the Green Party contributed to the passage of civil partnership for same-sex couples, the introduction of major planning reform, a major increase in renewable energy output, progressive budgets, and a nationwide scheme of home insulation retrofitting.
The party suffered a wipeout at the 2011 general election, with all of its six TDs losing their seats, including those of former Ministers John Gormley and Eamon Ryan. Three of their six incumbent TDs lost their deposits. The party's share of the vote fell below 2%, meaning that they could not reclaim election expenses, and their lack of parliamentary representation led to the ending of state funding for the party. The party candidates in the 2011 election to the Seanad were Dan Boyle and Niall Ó Brolcháin; neither was elected, and as a result, for the first time since 1989 the Green Party had no representatives in the Oireachtas.
In the aftermath of the wipeout, Eamon Ryan was elected as party leader on 27 May 2011, succeeding John Gormley, while Catherine Martin was later appointed the deputy leader of the party.
At the 2016 general election, Ryan and Martin gained two seats in the Dáil, while Grace O'Sullivan picked up a seat in the Seanad. In doing so, the Green Party became the first Irish political party to lose all of its seats in a general election and then return to win seats in a subsequent election. The Greens continued to pick up momentum in 2019, performing well in May in the concurrent 2019 local elections and 2019 European Parliament election, while in November that year Pippa Hackett captured a seat in the Seanad and Joe O'Brien secured the party's first ever by-election win in the 2019 Dublin Fingal by-election.
At the 2020 general election, the party had its best result ever, winning 7.1% of the first-preference votes and returning 12 TDs, an increase of ten from the last election. It became the fourth-largest party in the Dáil and entered government in coalition with Fianna Fáil and Fine Gael. Ryan, Martin and Roderic O'Gorman were appointed as cabinet ministers, with four Green Ministers of State. Clare Bailey, the leader of the Green Party in Northern Ireland, was amongst a number of Green members who stood against the coalition. She said it proposed the "most fiscally conservative arrangements in a generation" and that "the economic and finances behind this deal will really lead to some of the most vulnerable being hit the hardest", as well as it not doing enough on climate and social justice. She also said the deal "fails to deliver on our promise to tackle homelessness and provide better healthcare", "represents an unjust recovery" and "sets out an inadequate and vague pathway towards climate action". The party returned two senators at the 2020 Seanad election, with a further two senators nominated by the Taoiseach, Micheál Martin, bringing the party's total representation in the Oireachtas to 16. In July 2020, Eamon Ryan retained the leadership of the party with a narrow victory over Catherine Martin in the 2020 Green Party leadership election, by 994 votes to 946, a margin of 48 votes.
Despite the success at the general election, the party found itself dogged by infighting and resignations afterwards. Prominent member Saoirse McHugh, a candidate in the 2019 European elections, 2020 general election and the 2020 Seanad election, resigned from the party upon the Greens entering government with Fine Gael and Fianna Fáil, parties she believed would damage public enthusiasm for environmentalist policies by pairing them with "socially regressive" policies. Over the course of 2020, four councillors, as well as the leaders of the Young Greens and the Queer Greens, also departed from the party, all citing either bullying within the party or dissatisfaction with the coalition and its policies as the cause. Amongst the resignations were councillors Lorna Bogue and Liam Sinclair, who subsequently formed a new left-wing green party called An Rabharta Glas – Green Left in June 2021. Infighting continued in 2021 over attempts by Green Chairperson Hazel Chu to run for the Seanad. In May 2022, Green TDs Neasa Hourigan and Patrick Costello were suspended from the party for six months after they went against the party whip and voted for an opposition motion calling for the new National Maternity Hospital to be built on land wholly owned by the state. Hourigan was suspended again in March 2023, this time for 15 months, after she voted against the government on the issue of ending a ban on evictions.
On 23 July 2021, one of the Greens' flagship policies, the Climate Action and Low Carbon Development (Amendment) Bill 2021, was signed into law by the President. The bill creates a legally binding path to net zero emissions by 2050. Five-year carbon budgets produced by the Climate Change Advisory Council will dictate the path to carbon neutrality, with the aim of the first two budgets creating a 51% reduction by 2030. The five-year budgets will not be legally binding.
The Green Party has seven "founding principles", which are:
Broadly, these founding principles reflect the "four pillars" of green politics observed by the majority of Green Parties internationally: ecological wisdom, social justice, grassroots democracy, and nonviolence. They also reflect the six guiding principles of the Global Greens, which also includes a respect for diversity as a principle.
While strongly associated with environmentalist policies, the party also has policies covering all other key areas. These include protection of the Irish language, lowering the voting age in Ireland to 16, a directly elected Seanad, support for universal healthcare, and a constitutional amendment which guarantees that the water of Ireland will never be privatised. The party also advocates that terminally ill people should have the right to legally choose assisted dying, stating "provisions should apply only to those with a terminal illness which is likely to result in death within six months". It also states that "such a right would only apply where the person has a clear and settled intention to end their own life which is proved by making, and signing, a written declaration to that effect. Such a declaration must be countersigned by two qualified doctors".
In parallel to other Green Parties in Europe, the 1980s and 1990s saw a division within the Irish Green Party between two factions: the "Realists" (nicknamed the "Realos") and the "Fundamentalists" (nicknamed the "Fundies"). The 'Realists' advocated taking a pragmatic approach to politics, which would mean having to accept some compromises on policy in order to get party members elected and into government in order to enact change. The 'Fundamentalists' advocated more radical policies and rejected appeals for pragmatism, arguing that the looming effects of climate change would leave no time for compromise. Following a national convention in 1998, at which a realist majority of members defeated a minority of fundamentalist members on a number of votes, and the party's subsequent entry into government for the first time in 2007, the factionalism of the 'Realists versus the Fundamentalists' was seen to have wilted away, with the 'Realists' becoming the ascendant faction. However, in some respects, the division only lay dormant.
Following the 2019 local elections and the 2020 general election, the party had more elected representatives than ever before as well as its highest ever membership. On 22 July 2020, several prominent members of the party formed the "Just Transition Greens", an affiliate group within the party with a green-left/eco-socialist outlook and the objective of moving the party towards policies based on the concept of a "Just Transition". During the 2020 Green Party leadership election, a significant aspect of Catherine Martin's candidacy was the suggestion that she could better represent the views of these members within the party than the incumbent, Eamon Ryan.
The National Executive Committee is the organising committee of the party. It comprises the party leader Eamon Ryan, the deputy leader Catherine Martin, the Cathaoirleach Pauline O'Reilly, the National Coordinator, the General Secretary (in a non-voting role), a Young Greens representative, the Treasurer and ten members elected annually at the party convention.
The party did not have a national leader until 2001. At a special "Leadership Convention" in Kilkenny on 6 October 2001, Trevor Sargent was elected the first official leader of the Green Party while Mary White was elected deputy leader. Sargent was re-elected to his position in 2003 and again in 2005. The party's constitution requires that a leadership election be held within six months of a general election.
Sargent resigned the leadership in the wake of the 2007 general election to the 30th Dáil. During the campaign, Sargent had promised that he would not lead the party into Government with Fianna Fáil. At the election the party retained six Dáil seats, making it the most likely partner for Fianna Fáil. Sargent and the party negotiated a coalition government; at the 12 June 2007 membership meeting to approve the agreement, he announced his resignation as leader.
In the subsequent leadership election, John Gormley became the new leader on 17 July 2007, defeating Patricia McKenna by 478 votes to 263. Mary White was re-elected as deputy leader. Gormley served as Minister for the Environment, Heritage and Local Government from July 2007 until the Green Party's decision to exit government in December 2010.
Following the election defeats of 2011, Gormley announced his intention not to seek another term as Green Party leader. Eamon Ryan was elected as the new party leader, over party colleagues Phil Kearney and Cllr Malcolm Noonan in a postal ballot election of party members in May 2011. Monaghan-based former councillor Catherine Martin defeated Down-based Dr John Barry and former Senator Mark Dearey to the post of deputy leader on 11 June 2011 during the party's annual convention. Roderic O'Gorman was elected party chairperson.
The Green Party lost all its Dáil seats in the 2011 general election. Party Chairman Dan Boyle and Déirdre de Búrca were nominated by the Taoiseach to Seanad Éireann after the formation of the Fianna Fáil–Progressive Democrats–Green Party government in 2007, and Niall Ó Brolcháin was elected in December 2009. De Búrca resigned in February 2010, and was replaced by Mark Dearey. Neither Boyle nor Ó Brolcháin was re-elected to Seanad Éireann in the Seanad election of 2011, leaving the Green Party without Oireachtas representation until the 2016 general election, in which it regained two Dáil seats.
Ryan's leadership was challenged by deputy leader Catherine Martin in 2020 after the 2020 government formation; he narrowly won a poll of party members, 994 votes (51.2%) to 946.
The Green Party is organised throughout the island of Ireland, with regional structures in both the Republic of Ireland and Northern Ireland. The Green Party in Northern Ireland voted to become a regional partner of the Green Party in Ireland in 2005 at its annual convention, and again in a postal ballot in March 2006. Brian Wilson, formerly a councillor for the Alliance Party, won the Green Party's first seat in the Northern Ireland Assembly in the 2007 election. Steven Agnew held that seat in the 2011 election. | [
{
"paragraph_id": 0,
"text": "The Green Party (Irish: Comhaontas Glas, lit. 'Green Alliance') is a green political party that operates in the Republic of Ireland and Northern Ireland. As other like-minded Green parties, it has eco-socialist/green left and more moderate factions. It holds a pro-European stance. It was founded as the Ecology Party of Ireland in 1981 by Dublin teacher Christopher Fettes. The party became the Green Alliance in 1983 and adopted its current English language name in 1987 while the Irish name was kept unchanged. The party leader is Eamon Ryan, and the deputy leader is Catherine Martin and the cathaoirleach (chairperson) is Pauline O'Reilly. Green Party candidates have been elected to most levels of representation: local government (in both the Republic and Northern Ireland), Dáil Éireann, the Northern Ireland Assembly, and the European Parliament.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Green Party first entered the Dáil in 1989. It has participated in the Irish government twice, from 2007 to 2011 as junior partner in a coalition with Fianna Fáil, and since June 2020 in a coalition with Fianna Fáil and Fine Gael. Following the first period in government, the party suffered a wipeout in the February 2011 election, losing all six of its TDs. In the February 2016 election, it returned to the Dáil with two seats. Following this, Grace O'Sullivan was elected to the Seanad on 26 April that year of 2016 and Joe O'Brien was elected to Dáil Éireann in the 2019 Dublin Fingal by-election. In the 2020 general election, the party had its best result ever, securing 12 TDs and becoming the fourth largest party in Ireland.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The Green Party began life as the Ecology Party in 1981, with Christopher Fettes serving as the party's first chairperson. The party's first public appearance was modest: the event announced that they would be contesting the November 1982 general election, and was attended by their 7 election candidates, 20 party supporters, and one singular journalist. Fettes had opened the meeting by noting the party didn't expect to win any seats. Willy Clingan, the journalist present, recalled that \"The Ecology Party introduced its seven election candidates at the nicest and most endearingly honest press conference of the whole campaign\". The Ecology party took 0.2% of the vote that year.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "Following a name change to the Green Alliance, it contested the 1984 European elections, with party founder Roger Garland winning 1.9% in the Dublin constituency. The following year, it won its first election when Marcus Counihan was elected to Killarney Urban District Council at the 1985 local elections, buoyed by winning 5,200 first preference votes as a European candidate in Dublin the previous year. The party nationally ran 34 candidates and won 0.6% of the vote.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The party continued to struggle until the 1989 general election when the Green Party (as it was now named) won its first seat in Dáil Éireann, when Roger Garland was elected in Dublin South. Garland lost his seat at the 1992 general election, while Trevor Sargent gained a seat in Dublin North. In the 1994 European election, Patricia McKenna topped the poll in the Dublin constituency and Nuala Ahern won a seat in Leinster. They retained their European Parliament seats in the 1999 European election, although the party lost five councillors in local elections held that year despite an increase in its vote. At the 1997 general election, the party gained a seat when John Gormley won a Dáil seat in Dublin South-East.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "At the 2002 general election the party made a breakthrough, getting six Teachtaí Dála (TDs) elected to the Dáil with 4% of the national vote. However, in the 2004 European election, the party lost both of its European Parliament seats. In the 2004 local elections, it increased its number of councillors at county level from 8 to 18 (out of 883) and at town council level from 5 to 14 (out of 744).",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The party gained its first representation in the Northern Ireland Assembly in 2007, the Green Party in Northern Ireland having become a regional branch of the party the previous year.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The Green Party entered government for the first time after the 2007 general election, held on 24 May. Although its share of first-preference votes increased at the election, the party failed to increase the number of TDs returned. Mary White won a seat for the first time in Carlow–Kilkenny; however, Dan Boyle lost his seat in Cork South-Central. The party had approached the 2007 general election on an independent platform, not ruling any out coalition partners while expressing its preference for an alternative to the outgoing coalition of Fianna Fáil and the Progressive Democrats. Neither the outgoing government nor an alternative of Fine Gael, Labour and the Green Party had sufficient seats to form a majority. Fine Gael ruled out a coalition arrangement with Sinn Féin, opening the way for Green Party negotiations with Fianna Fáil.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Before the negotiations began, Ciarán Cuffe TD wrote on his blog that \"a deal with Fianna Fáil would be a deal with the devil… and [the Green Party would be] decimated as a Party\". After protracted negotiations, a draft programme for government was agreed to between the Greens and Fianna Fáil. On 13 June 2007, Green members at the Mansion House in Dublin voted 86% in favour (441 to 67; with 2 spoilt votes) of entering coalition with Fianna Fáil. The following day, the six Green Party TDs voted for the re-election of Bertie Ahern as Taoiseach. New party leader John Gormley was appointed as Minister for the Environment, Heritage and Local Government and Eamon Ryan was appointed as Minister for Communications, Energy and Natural Resources. Trevor Sargent was appointed as Minister of State at the Department of Agriculture, Fisheries and Food with responsibility for Food and Horticulture.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Before its entry into government, the Green Party had been a vocal supporter of the Shell to Sea movement, the campaign to reroute the M3 motorway away from Tara and (to a lesser extent) the campaign to end United States military use of Shannon Airport. After the party entered government there were no substantive changes in government policy on these issues, which meant that Eamon Ryan oversaw the Corrib gas project while he was in office. The Green Party had, at its last annual conference, made an inquiry into the irregularities surrounding the project (see Corrib gas controversy) a precondition of entering government but changed its stance during post-election negotiations with Fianna Fáil.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The 2008 budget did not include a carbon levy on fuels such as petrol, diesel and home heating oil, which the Green Party had sought before the election. A carbon levy was, however, introduced in the 2010 Budget. The 2008 budget did include a separate carbon budget announced by Gormley, which introduced new energy efficiency tax credit, a ban on incandescent bulbs from January 2009, a tax scheme incentivising commuters' purchases of bicycles and a new scale of vehicle registration tax based on carbon emissions.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "At a special convention on whether to support the Treaty of Lisbon on 19 January 2008, the party voted 63.5% in favour of supporting the Treaty; this fell short of the party's two-thirds majority requirement for policy issues. As a result, the Green Party did not have an official campaign in the first Lisbon Treaty referendum, although individual members were involved on different sides. The referendum did not pass in 2008, and following the Irish government's negotiation with EU member states of additional legal guarantees and assurances, the Green Party held another special convention meeting in Dublin on 18 July 2009 to decide its position on the second Lisbon referendum. Precisely two-thirds of party members present voted to campaign for a 'Yes' in the referendum. This was the first time in the party's history that it had campaigned in favour of a European treaty.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The government's response to the post-2008 banking crisis significantly affected the party's support, and it suffered at the 2009 local elections, returning with only three County Council seats in total and losing its entire traditional Dublin base, with the exception of a Town Council seat in Balbriggan.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Déirdre de Búrca, one of two Green Senators nominated by Taoiseach Bertie Ahern in 2007, resigned from the party and her seat in 2010, in part owing to the party's inability to secure her a job in the European Commission. On 23 February 2010, Trevor Sargent resigned as Minister of State for Food and Horticulture owing to allegations over contacting Gardaí about a criminal case involving a constituent, with Ciarán Cuffe being appointed as his replacement the following March.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The Green Party supported the passage of legislation for EC–ECB–IMF financial support for Ireland's bank bailout. On 19 January, the party derailed Taoiseach Brian Cowen's plans to reshuffle his cabinet when it refused to endorse Cowen's intended replacement ministers, forcing Cowen to redistribute the vacant portfolios among incumbent ministers. The Greens were angered at not having been consulted about this effort, and went as far as to threaten to pull out of the coalition unless Cowen set a firm date for an election due that spring. He ultimately set the date for 11 March.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "On 23 January 2011, the Green Party met with Cowen following his resignation as leader of senior coalition partner Fianna Fáil the previous afternoon. The Green Party then announced it was breaking off the coalition and going into opposition with immediate effect. Ministers Gormley and Ryan resigned as cabinet ministers, and Cuffe and White resigned as Ministers of State. Green Party leader John Gormley said at a press conference announcing the withdrawal:",
"title": "History"
},
{
"paragraph_id": 16,
"text": "For a very long time we in the Green Party have stood back in the hope that Fianna Fáil could resolve persistent doubts about their party leadership. A definitive resolution of this has not yet been possible. And our patience has reached an end.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In almost four years in Government, from 2007 to 2011, the Green Party contributed to the passage of civil partnership for same-sex couples, the introduction of major planning reform, a major increase in renewable energy output, progressive budgets, and a nationwide scheme of home insulation retrofitting.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The party suffered a wipeout at the 2011 general election, with all of its six TDs losing their seats, including those of former Ministers John Gormley and Eamon Ryan. Three of their six incumbent TDs lost their deposits. The party's share of the vote fell below 2%, meaning that they could not reclaim election expenses, and their lack of parliamentary representation led to the ending of state funding for the party. The party candidates in the 2011 election to the Seanad were Dan Boyle and Niall Ó Brolcháin; neither was elected, and as a result, for the first time since 1989 the Green Party had no representatives in the Oireachtas.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "In the aftermath of the wipeout Eamon Ryan was elected as party leader on 27 May 2011, succeeding John Gormley, while Catherine Martin was later appointed the deputy leader of the party.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "At the 2016 general election Ryan and Martin gained two seats in the Dáil while Grace O'Sullivan picked up a seat in the Seanad. In doing so the Green party became the first Irish political party to lose all their seats in a general election but come back and win seats in a subsequent election. The Greens continued to pick up momentum in 2019, performing quite well in May during the concurrent 2019 local elections and 2019 European Parliament election while in November that same year the party saw Pippa Hackett capture a seat in the Seanad and Joe O'Brien bring home the party's first ever by-election win as a result of the 2019 Dublin Fingal by-election.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "At the 2020 general election, the party had its best result ever, winning 7.1% of the first-preference votes and returning 12 TDs, an increase of ten from the last election. It became the fourth-largest party in the Dáil and entered government in coalition with Fianna Fáil and Fine Gael. Ryan, Martin and Roderic O'Gorman were appointed as cabinet ministers, with four Green Ministers of State. Clare Bailey, the leader of the Green Party in Northern Ireland, was amongst a number of Green members who stood against the coalition. She said it proposed the \"most fiscally conservative arrangements in a generation\" and that \"the economic and finances behind this deal will really lead to some of the most vulnerable being hit the hardest\", as well as it not doing enough on climate and social justice. She also said the deal \"fails to deliver on our promise to tackle homelessness and provide better healthcare\", \"represents an unjust recovery\" and \"sets out an inadequate and vague pathway towards climate action\". The party returned two senators at the 2020 Seanad election, with a further two senators nominated by the Taoiseach, Micheál Martin bringing the total party representation in the Oireachtas to 16. In July 2020, Eamon Ryan retained his leadership of the party with a narrow leadership election victory over Catherine Martin in the 2020 Green Party leadership election by 994 votes to 946, a margin of 48 votes.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Despite the success at the general election, the party found itself dogged by infighting and resignations afterwards. Prominent member Saoirse McHugh, a candidate in the 2019 European elections, 2020 general election and the 2020 Seanad election, resigned from the party upon the Greens entering government with Fine Gael and Fianna Fáil, parties she believed would damage public enthusiasm for environmentalist policies by pairing them with \"socially regressive\" policies. Over the course of 2020, 4 councillors as well as both the leader of the Young Greens and the leader of the Queer Greens would also depart from the party, all citing either bullying within the party or dissatisfaction with the coalition and its policies as the cause. Amongst the resignations were councillors Lorna Bogue and Liam Sinclair, who subsequently formed a new left-wing green party called An Rabharta Glas – Green Left in June 2021. Infighting continued in 2021 over attempts by Green Chairperson Hazel Chu to run for the Seanad. In May 2022, Green TDs Neasa Hourigan and Patrick Costello were suspended from the party for six months after they went against the party whip and voted of an opposition motion calling for the new National Maternity Hospital to be built on land wholly owned by the state. Hourigan was suspended again in March 2023, this time for 15 months, after she voted against the government on the issue of ending a ban on evictions.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "On 23 July 2021, one of the Greens' flagship policies, the Climate Action and Low Carbon Development (Amendment) Bill 2021, was signed into law by the President. The bill creates a legally binding path to net zero emissions by 2050. Five-year carbon budgets produced by the Climate Change Advisory Council will dictate the path to carbon neutrality, with the aim of the first two budgets creating a 51% reduction by 2030. The five-year budgets will not be legally binding.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "The Green Party has seven \"founding principles\", which are:",
"title": "Ideology and policies"
},
{
"paragraph_id": 25,
"text": "Broadly, these founding principles reflect the \"four pillars\" of green politics observed by the majority of Green Parties internationally: ecological wisdom, social justice, grassroots democracy, and nonviolence. They also reflect the six guiding principles of the Global Greens, which also includes a respect for diversity as a principle.",
"title": "Ideology and policies"
},
{
"paragraph_id": 26,
"text": "While strongly associated with environmentalist policies, the party also has policies covering all other key areas. These include protection of the Irish language, lowering the voting age in Ireland to 16, a directly elected Seanad, support for universal healthcare, and a constitutional amendment which guarantees that the water of Ireland will never be privatised. The party also advocates that terminally ill people should have the right to legally choose assisted dying, stating \"provisions should apply only to those with a terminal illness which is likely to result in death within six months\". It also states that \"such a right would only apply where the person has a clear and settled intention to end their own life which is proved by making, and signing, a written declaration to that effect. Such a declaration must be countersigned by two qualified doctors\".",
"title": "Ideology and policies"
},
{
"paragraph_id": 27,
"text": "In parallel to other Green Parties in Europe, the 1980s and 1990s saw a division within the Irish Green Party between two factions; the \"Realists\" (nicknamed the \"Realos\") and the \"Fundamentalists (nicknamed the \"Fundies\"). The 'Realists' advocated taking a pragmatic approach to politics, which would mean having to accept some compromises on policy in order to get party members elected and into government in order to enact change. The 'Fundamentalists' advocated more radical policies and rejected appeals for pragmatism, citing that the looming effects of Climate Change would leave no time for compromise. Following a national convention in 1998 which saw a realist majority of members defeat a minority of fundamentalist members on a number of votes, and the party subsequently enter government for the first time in 2007, the factionalism of the 'Realists vs the Fundamentalists' was seen to have wilted away with the 'Realists' becoming the ascendent faction. However, in some respects, the division only laid dormant.",
"title": "Ideology and policies"
},
{
"paragraph_id": 28,
"text": "Following the 2019 local elections and the 2020 general election, the party had more elected representatives than ever before as well as its highest ever membership. On 22 July 2020, several prominent members of the party formed the \"Just Transition Greens\", an affiliate group within the party with a green left/eco-socialist outlook, who have the objective of moving the party towards policies based on the concept of a \"Just Transition\". During the 2020 Green Party leadership election, a significant aspect of the candidacy of Catherine Martin was that it was suggested that Martin could better represent the views of these individuals within the party than the incumbent Eamon Ryan.",
"title": "Ideology and policies"
},
{
"paragraph_id": 29,
"text": "The National Executive Committee is the organising committee of the party. It comprises the party leader Eamon Ryan, the deputy leader Catherine Martin, the Cathaoirleach Pauline O'Reilly, the National Coordinator, the General Secretary (in a non-voting role), a Young Greens representative, the Treasurer and ten members elected annually at the party convention.",
"title": "Organisation"
},
{
"paragraph_id": 30,
"text": "The party did not have a national leader until 2001. At a special \"Leadership Convention\" in Kilkenny on 6 October 2001, Trevor Sargent was elected the first official leader of the Green Party while Mary White was elected deputy leader. Sargent was re-elected to his position in 2003 and again in 2005. The party's constitution requires that a leadership election be held within six months of a general election.",
"title": "Organisation"
},
{
"paragraph_id": 31,
"text": "Sargent resigned the leadership in the wake of the 2007 general election to the 30th Dáil. During the campaign, Sargent had promised that he would not lead the party into Government with Fianna Fáil. At the election the party retained six Dáil seats, making it the most likely partner for Fianna Fáil. Sargent and the party negotiated a coalition government; at the 12 June 2007 membership meeting to approve the agreement, he announced his resignation as leader.",
"title": "Organisation"
},
{
"paragraph_id": 32,
"text": "In the subsequent leadership election, John Gormley became the new leader on 17 July 2007, defeating Patricia McKenna by 478 votes to 263. Mary White was subsequently re-elected as the deputy Leader. Gormley served as Minister for the Environment, Heritage and Local Government from July 2007 until the Green Party's decision to exit government in December 2010.",
"title": "Organisation"
},
{
"paragraph_id": 33,
"text": "Following the election defeats of 2011, Gormley announced his intention not to seek another term as Green Party leader. Eamon Ryan was elected as the new party leader, over party colleagues Phil Kearney and Cllr Malcolm Noonan in a postal ballot election of party members in May 2011. Monaghan-based former councillor Catherine Martin defeated Down-based Dr John Barry and former Senator Mark Dearey to the post of deputy leader on 11 June 2011 during the party's annual convention. Roderic O'Gorman was elected party chairperson.",
"title": "Organisation"
},
{
"paragraph_id": 34,
"text": "The Green Party lost all its Dáil seats in the 2011 general election. Party Chairman Dan Boyle and Déirdre de Búrca were nominated by the Taoiseach to Seanad Éireann after the formation of the Fianna Fáil–Progressive Democrats–Green Party government in 2007, and Niall Ó Brolcháin was elected in December 2009. De Búrca resigned in February 2010, and was replaced by Mark Dearey. Neither Boyle nor O'Brolchain was re-elected to Seanad Éireann in the Seanad election of 2011, leaving the Green Party without Oireachtas representation until the 2016 general election, in which it regained two Dáil seats.",
"title": "Organisation"
},
{
"paragraph_id": 35,
"text": "Ryan's leadership was challenged by deputy leader Catherine Martin in 2020 after the 2020 government formation; he narrowly won a poll of party members, 994 votes (51.2%) to 946.",
"title": "Organisation"
},
{
"paragraph_id": 36,
"text": "The Green Party is organised throughout the island of Ireland, with regional structures in both the Republic of Ireland and Northern Ireland. The Green Party in Northern Ireland voted to become a regional partner of the Green Party in Ireland in 2005 at its annual convention, and again in a postal ballot in March 2006. Brian Wilson, formerly a councillor for the Alliance Party, won the Green Party's first seat in the Northern Ireland Assembly in the 2007 election. Steven Agnew held that seat in the 2011 election.",
"title": "Organisation"
}
]
| The Green Party is a green political party that operates in the Republic of Ireland and Northern Ireland. Like other Green parties, it has eco-socialist/green left and more moderate factions. It holds a pro-European stance. It was founded as the Ecology Party of Ireland in 1981 by Dublin teacher Christopher Fettes. The party became the Green Alliance in 1983 and adopted its current English language name in 1987, while the Irish name was kept unchanged. The party leader is Eamon Ryan, the deputy leader is Catherine Martin, and the cathaoirleach (chairperson) is Pauline O'Reilly. Green Party candidates have been elected to most levels of representation: local government, Dáil Éireann, the Northern Ireland Assembly, and the European Parliament. The Green Party first entered the Dáil in 1989. It has participated in the Irish government twice, from 2007 to 2011 as junior partner in a coalition with Fianna Fáil, and since June 2020 in a coalition with Fianna Fáil and Fine Gael. Following the first period in government, the party suffered a wipeout in the February 2011 election, losing all six of its TDs. In the February 2016 election, it returned to the Dáil with two seats. Following this, Grace O'Sullivan was elected to the Seanad on 26 April 2016, and Joe O'Brien was elected to Dáil Éireann in the 2019 Dublin Fingal by-election. In the 2020 general election, the party had its best result ever, securing 12 TDs and becoming the fourth largest party in Ireland. | 2001-09-28T06:06:08Z | 2023-12-25T10:07:06Z | [
"Template:Infobox political party",
"Template:Reflist",
"Template:Portal",
"Template:Green Party (Ireland)",
"Template:Use Hiberno-English",
"Template:Columns-list",
"Template:Increase",
"Template:Dead link",
"Template:Webarchive",
"Template:Spaced ndash",
"Template:Official website",
"Template:Green politics",
"Template:Authority control",
"Template:CSS image crop",
"Template:Cite news",
"Template:Green parties",
"Template:Short description",
"Template:For",
"Template:Composition bar",
"Template:No2",
"Template:Yes2",
"Template:Decrease",
"Template:Lang-ga",
"Template:Main",
"Template:Blockquote",
"Template:Cite web",
"Template:Cbignore",
"Template:Political parties in Ireland",
"Template:Steady",
"Template:Use dmy dates",
"Template:Citation needed"
]
| https://en.wikipedia.org/wiki/Green_Party_(Ireland) |
15,085 | Iconoclasm | Iconoclasm (from Greek: εἰκών, eikṓn, 'figure, icon' + κλάω, kláō, 'to break') is the social belief in the importance of the destruction of icons and other images or monuments, most frequently for religious or political reasons. People who engage in or support iconoclasm are called iconoclasts, a term that has come to be figuratively applied to any individual who challenges "cherished beliefs or venerated institutions on the grounds that they are erroneous or pernicious."
Conversely, one who reveres or venerates religious images is called (by iconoclasts) an iconolater; in a Byzantine context, such a person is called an iconodule or iconophile. Iconoclasm does not generally encompass the destruction of the images of a specific ruler after his or her death or overthrow, a practice better known as damnatio memoriae.
While iconoclasm may be carried out by adherents of a different religion, it is more commonly the result of sectarian disputes between factions of the same religion. The term originates from the Byzantine Iconoclasm, the struggles between proponents and opponents of religious icons in the Byzantine Empire from 726 to 842 AD. Degrees of iconoclasm vary greatly among religions and their branches, but are strongest in religions which oppose idolatry, including the Abrahamic religions. Outside of the religious context, iconoclasm can refer to movements for widespread destruction of symbols of an ideology or cause, such as the destruction of monarchist symbols during the French Revolution.
In the Bronze Age, the most significant episode of iconoclasm occurred in Egypt during the Amarna Period, when Akhenaten, based in his new capital of Akhetaten, instituted a significant shift in Egyptian artistic styles alongside a campaign of intolerance towards the traditional gods and a new emphasis on a state monolatristic tradition focused on the god Aten, the Sun disk—many temples and monuments were destroyed as a result:
In rebellion against the old religion and the powerful priests of Amun, Akhenaten ordered the eradication of all of Egypt's traditional gods. He sent royal officials to chisel out and destroy every reference to Amun and the names of other deities on tombs, temple walls, and cartouches to instill in the people that the Aten was the one true god.
Public references to Akhenaten were destroyed soon after his death. Comparing the ancient Egyptians with the Israelites, Jan Assmann writes:
For Egypt, the greatest horror was the destruction or abduction of the cult images. In the eyes of the Israelites, the erection of images meant the destruction of divine presence; in the eyes of the Egyptians, this same effect was attained by the destruction of images. In Egypt, iconoclasm was the most terrible religious crime; in Israel, the most terrible religious crime was idolatry. In this respect Osarseph alias Akhenaten, the iconoclast, and the Golden Calf, the paragon of idolatry, correspond to each other inversely, and it is strange that Aaron could so easily avoid the role of the religious criminal. It is more than probable that these traditions evolved under mutual influence. In this respect, Moses and Akhenaten became, after all, closely related.
According to the Hebrew Bible, God instructed the Israelites to "destroy all [the] engraved stones, destroy all [the] molded images, and demolish all [the] high places" of the indigenous Canaanite population as soon as they entered the Promised Land.
In Judaism, King Hezekiah purged Solomon's Temple in Jerusalem and all figures were also destroyed in the Land of Israel, including the Nehushtan, as recorded in the Second Book of Kings. His reforms were reversed in the reign of his son Manasseh.
Scattered expressions of opposition to the use of images have been reported: the Synod of Elvira appeared to endorse iconoclasm; Canon 36 states, "Pictures are not to be placed in churches, so that they do not become objects of worship and adoration." A possible translation is also: "There shall be no pictures in the church, lest what is worshipped and adored should be depicted on the walls." The date of this canon is disputed. Proscription ceased after the destruction of pagan temples. However, widespread use of Christian iconography only began as Christianity increasingly spread among Gentiles after the legalization of Christianity by Roman Emperor Constantine (c. 312 AD). During the process of Christianisation under Constantine, Christian groups destroyed the images and sculptures expressive of the Roman Empire's polytheist state religion.
Among early church theologians, iconoclastic tendencies were supported by theologians such as: Tertullian, Clement of Alexandria, Origen, Lactantius, Justin Martyr, Eusebius and Epiphanius.
The period after the reign of Byzantine Emperor Justinian (527–565) evidently saw a huge increase in the use of images, both in volume and quality, and a gathering aniconic reaction.
One notable change within the Byzantine Empire came in 695, when Justinian II's government added a full-face image of Christ on the obverse of imperial gold coins. The change caused the Caliph Abd al-Malik to stop his earlier adoption of Byzantine coin types. He started a purely Islamic coinage with lettering only. A letter by the Patriarch Germanus, written before 726 to two iconoclast bishops, says that "now whole towns and multitudes of people are in considerable agitation over this matter," but there is little written evidence of the debate.
Government-led iconoclasm began with Byzantine Emperor Leo III, who issued a series of edicts between 726 and 730 against the veneration of images. The religious conflict created political and economic divisions in Byzantine society; iconoclasm was generally supported by the Eastern, poorer, non-Greek peoples of the Empire who had to frequently deal with raids from the new Muslim Empire. On the other hand, the wealthier Greeks of Constantinople and the peoples of the Balkan and Italian provinces strongly opposed iconoclasm.
Peter of Bruys opposed the usage of religious images, and the Strigolniki were also possibly iconoclastic. Claudius of Turin, the bishop of Turin from 817 until his death, is most noted for teaching iconoclasm.
The first iconoclastic wave happened in Wittenberg in the early 1520s under reformers Thomas Müntzer and Andreas Karlstadt, in the absence of Martin Luther, who then, concealed under the pen-name of 'Junker Jörg', intervened to calm things down. Luther argued that the mental picturing of Christ when reading the Scriptures was similar in character to artistic renderings of Christ.
In contrast to the Lutherans who favoured certain types of sacred art in their churches and homes, the Reformed (Calvinist) leaders, in particular Andreas Karlstadt, Huldrych Zwingli and John Calvin, encouraged the removal of religious images by invoking the Decalogue's prohibition of idolatry and the manufacture of graven (sculpted) images of God. As a result, individuals attacked statues and images, most famously in the beeldenstorm across the Low Countries in 1566. However, in most cases, civil authorities removed images in an orderly manner in the newly Reformed Protestant cities and territories of Europe.
Iconoclastic belief caused havoc throughout Europe. In 1523, largely under the influence of the Swiss reformer Huldrych Zwingli, a vast number of his followers came to view themselves as involved in a spiritual community that in matters of faith should obey neither the visible Church nor lay authorities. According to Peter George Wallace, "Zwingli's attack on images, at the first debate, triggered iconoclastic incidents in Zurich and the villages under civic jurisdiction that the reformer was unwilling to condone." In response to this protest against authority, "Zwingli responded with a carefully reasoned treatise that men could not live in society without laws and constraint".
Significant iconoclastic riots took place in Basel (in 1529), Zurich (1523), Copenhagen (1530), Münster (1534), Geneva (1535), Augsburg (1537), Scotland (1559), Rouen (1560), and Saintes and La Rochelle (1562). Calvinist iconoclasm in Europe "provoked reactive riots by Lutheran mobs" in Germany and "antagonized the neighbouring Eastern Orthodox" in the Baltic region.
The Seventeen Provinces (now the Netherlands, Belgium, and parts of Northern France) were disrupted by widespread Calvinist iconoclasm in the summer of 1566. This period, known as the Beeldenstorm, began with the destruction of the statuary of the Monastery of Saint Lawrence in Steenvoorde after a "Hagenpreek," or field sermon, by Sebastiaan Matte on 10 August 1566; by October the wave of furor had gone all through the Spanish Netherlands up to Groningen. Hundreds of other attacks included the sacking of the Monastery of Saint Anthony after a sermon by Jacob de Buysere. The Beeldenstorm marked the start of the revolution against the Spanish forces and the Catholic Church.
During the Reformation in England, which started during the reign of Anglican monarch Henry VIII, and was urged on by reformers such as Hugh Latimer and Thomas Cranmer, limited official action was taken against religious images in churches in the late 1530s. Henry's young son, Edward VI, came to the throne in 1547 and, under Cranmer's guidance, issued injunctions for Religious Reforms in the same year and in 1550, an Act of Parliament "for the abolition and putting away of divers books and images."
During the English Civil War, the Parliamentarians reorganised the administration of East Anglia into the Eastern Association of counties. This covered some of the wealthiest counties in England, which in turn financed a substantial and significant military force. The Earl of Manchester was appointed the commanding officer of these forces, and he in turn appointed Smasher Dowsing as Provost Marshal, with a warrant to demolish religious images which were considered to be superstitious or linked with popery. Bishop Joseph Hall of Norwich described the events of 1643, when troops and citizens, encouraged by a Parliamentary ordinance against superstition and idolatry, behaved thus:
Lord what work was here! What clattering of glasses! What beating down of walls! What tearing up of monuments! What pulling down of seats! What wresting out of irons and brass from the windows! What defacing of arms! What demolishing of curious stonework! What tooting and piping upon organ pipes! And what a hideous triumph in the market-place before all the country, when all the mangled organ pipes, vestments, both copes and surplices, together with the leaden cross which had newly been sawn down from the Green-yard pulpit and the service-books and singing books that could be carried to the fire in the public market-place were heaped together.
Protestant Christianity was not uniformly hostile to the use of religious images. Martin Luther taught the "importance of images as tools for instruction and aids to devotion," stating: "If it is not a sin but good to have the image of Christ in my heart, why should it be a sin to have it in my eyes?" Lutheran churches retained ornate church interiors with a prominent crucifix, reflecting their high view of the real presence of Christ in Eucharist. As such, "Lutheran worship became a complex ritual choreography set in a richly furnished church interior." For Lutherans, "the Reformation renewed rather than removed the religious image."
Lutheran scholar Jeremiah Ohl writes:
Zwingli and others for the sake of saving the Word rejected all plastic art; Luther, with an equal concern for the Word, but far more conservative, would have all the arts to be the servants of the Gospel. "I am not of the opinion" said [Luther], "that through the Gospel all the arts should be banished and driven away, as some zealots want to make us believe; but I wish to see them all, especially music, in the service of Him Who gave and created them." Again he says: "I have myself heard those who oppose pictures, read from my German Bible.... But this contains many pictures of God, of the angels, of men, and of animals, especially in the Revelation of St. John, in the books of Moses, and in the book of Joshua. We therefore kindly beg these fanatics to permit us also to paint these pictures on the wall that they may be remembered and better understood, inasmuch as they can harm as little on the walls as in books. Would to God that I could persuade those who can afford it to paint the whole Bible on their houses, inside and outside, so that all might see; this would indeed be a Christian work. For I am convinced that it is God's will that we should hear and learn what He has done, especially what Christ suffered. But when I hear these things and meditate upon them, I find it impossible not to picture them in my heart. Whether I want to or not, when I hear, of Christ, a human form hanging upon a cross rises up in my heart: just as I see my natural face reflected when I look into water. Now if it is not sinful for me to have Christ's picture in my heart, why should it be sinful to have it before my eyes?
The Ottoman Sultan Suleiman the Magnificent, who had pragmatic reasons to support the Dutch Revolt (the rebels, like himself, were fighting against Spain) also completely approved of their act of "destroying idols," which accorded well with Muslim teachings.
A little later in Dutch history, in 1627 the artist Johannes van der Beeck was arrested and tortured, charged with being a religious non-conformist and a blasphemer, heretic, atheist, and Satanist. The 25 January 1628 judgment from five noted advocates of The Hague pronounced him guilty of "blasphemy against God and avowed atheism, at the same time as leading a frightful and pernicious lifestyle". At the court's order his paintings were burned, and only a few of them survive.
From the 16th through the 19th centuries, many of the polytheistic religious deities and texts of pre-colonial Americas, Oceania, and Africa were destroyed by Christian missionaries and their converts, such as during the Spanish conquest of the Aztec Empire and the Spanish conquest of the Inca Empire.
In Japan during the early modern age, the spread of Catholicism also involved the repulsion of non-Christian religious structures, including Buddhist temples and Shinto shrines and figures. At times of conflict with rivals or some time after the conversion of several daimyos, Christian converts would often destroy Buddhist and Shinto religious structures.
Many of the moai of Easter Island were toppled during the 18th century in the iconoclasm of civil wars before any European encounter. Other instances of iconoclasm may have occurred throughout Eastern Polynesia during its conversion to Christianity in the 19th century.
After the Second Vatican Council in the late 20th century, some Roman Catholic parish churches discarded much of their traditional imagery, art, and architecture.
Islam has a much stronger tradition of forbidding the depiction of figures, especially religious figures, with Sunni Islam forbidding it more than Shia Islam. In the history of Islam, the act of removing idols from the Ka'ba in Mecca has great symbolic and historic importance for all believers.
In general, Muslim societies have avoided the depiction of living beings (both animals and humans) within such sacred spaces as mosques and madrasahs. This ban on figural representation is not based on the Qur'an; instead, it is based on traditions described within the Hadith. The prohibition of figuration has not always been extended to the secular sphere, and a robust tradition of figural representation exists within Muslim art. However, Western authors have tended to perceive "a long, culturally determined, and unchanging tradition of violent iconoclastic acts" within Islamic society.
The first act of Muslim iconoclasm dates to the beginning of Islam, in 630, when the various statues of Arabian deities housed in the Kaaba in Mecca were destroyed. There is a tradition that Muhammad spared a fresco of Mary and Jesus. This act was intended to bring an end to the idolatry which, in the Muslim view, characterized Jahiliyyah.
The destruction of the idols of Mecca did not, however, determine the treatment of other religious communities living under Muslim rule after the expansion of the caliphate. Most Christians under Muslim rule, for example, continued to produce icons and to decorate their churches as they wished. A major exception to this pattern of tolerance in early Islamic history was the "Edict of Yazīd", issued by the Umayyad caliph Yazīd II in 722–723. This edict ordered the destruction of crosses and Christian images within the territory of the caliphate. Researchers have discovered evidence that the order was followed, particularly in present-day Jordan, where archaeological evidence shows the removal of images from the mosaic floors of some, although not all, of the churches that stood at this time. But Yazīd's iconoclastic policies were not continued by his successors, and Christian communities of the Levant continued to make icons without significant interruption from the sixth century to the ninth.
Al-Maqrīzī, writing in the 15th century, attributes the missing nose on the Great Sphinx of Giza to iconoclasm by Muhammad Sa'im al-Dahr, a Sufi Muslim in the mid-1300s. He was reportedly outraged by local Muslims making offerings to the Great Sphinx in the hope of controlling the flood cycle, and he was later executed for vandalism. However, whether this was actually the cause of the missing nose has been debated by historians. Mark Lehner, having performed an archaeological study, concluded that it was broken with instruments at an earlier unknown time between the 3rd and 10th centuries.
Certain conquering Muslim armies have used local temples or houses of worship as mosques. An example is Hagia Sophia in Istanbul (formerly Constantinople), which was converted into a mosque in 1453. Most icons were desecrated and the rest were covered with plaster. In 1934 the government of Turkey decided to convert the Hagia Sophia into a museum and the restoration of the mosaics was undertaken by the American Byzantine Institute beginning in 1932.
Certain Muslim denominations continue to pursue iconoclastic agendas. There has been much controversy within Islam over the recent and apparently on-going destruction of historic sites by Saudi Arabian authorities, prompted by the fear they could become the subject of "idolatry."
A recent act of iconoclasm was the 2001 destruction of the giant Buddhas of Bamyan by the then-Taliban government of Afghanistan. The act generated worldwide protests and was not supported by other Muslim governments and organizations. It was widely perceived in the Western media as a result of the Muslim prohibition against figural decoration. Such an account overlooks "the coexistence between the Buddhas and the Muslim population that marveled at them for over a millennium" before their destruction. According to art historian F. B. Flood, analysis of the Taliban's statements regarding the Buddhas suggests that their destruction was motivated more by political than by theological concerns. Taliban spokesmen have given many different explanations of the motives for the destruction.
During the Tuareg rebellion of 2012, the radical Islamist militia Ansar Dine destroyed various Sufi shrines from the 15th and 16th centuries in the city of Timbuktu, Mali. In 2016, the International Criminal Court (ICC) sentenced Ahmad al-Faqi al-Mahdi, a former member of Ansar Dine, to nine years in prison for this destruction of cultural world heritage. This was the first time that the ICC convicted a person for such a crime.
The short-lived Islamic State of Iraq and the Levant carried out iconoclastic attacks such as the destruction of Shia mosques and shrines. Notable incidents include blowing up the Mosque of the Prophet Yunus (Jonah) and destroying the Shrine to Seth in Mosul.
In early Medieval India, there were numerous recorded instances of temple desecration by Indian kings against rival Indian kingdoms, which involved conflicts between devotees of different Hindu deities, as well as conflicts between Hindus, Buddhists, and Jains.
In the 8th century, Bengali troops from the Buddhist Pala Empire desecrated temples of Vishnu, the state deity of Lalitaditya's kingdom in Kashmir. In the early 9th century, Indian Hindu kings from Kanchipuram and the Pandyan king Srimara Srivallabha looted Buddhist temples in Sri Lanka. In the early 10th century, the Pratihara king Herambapala looted an image from a temple in the Sahi kingdom of Kangra, which was later looted by the Pratihara king Yashovarman.
The Chach Nama records the destruction of temples during the early 8th century, when the Umayyad governor of Damascus, al-Hajjaj ibn Yusuf, mobilized an expedition of 6,000 cavalry under Muhammad bin Qasim in 712.
Historian Upendra Thakur records the persecution of Hindus and Buddhists:
Muhammad triumphantly marched into the country, conquering Debal, Sehwan, Nerun, Brahmanadabad, Alor and Multan one after the other in quick succession, and in less than a year and a half, the far-flung Hindu kingdom was crushed ... There was a fearful outbreak of religious bigotry in several places and temples were wantonly desecrated. At Debal, the Nairun and Aror temples were demolished and converted into mosques.
Perhaps the most notorious episode of iconoclasm in India was Mahmud of Ghazni's attack on the Somnath Temple from across the Thar Desert. The temple was first raided in 725, when Junayad, the governor of Sind, sent his armies to destroy it. In 1024, during the reign of Bhima I, the prominent Turkic-Muslim ruler Mahmud of Ghazni raided Gujarat, plundering the Somnath Temple and breaking its jyotirlinga despite pleas by Brahmins not to break it. He took away a booty of 20 million dinars. The attack may have been inspired by the belief that an idol of the goddess Manat had been secretly transferred to the temple. According to the Ghaznavid court-poet Farrukhi Sistani, who claimed to have accompanied Mahmud on his raid, Somnat (as rendered in Persian) was a garbled version of su-manat referring to the goddess Manat. According to him, as well as a later Ghaznavid historian Abu Sa'id Gardezi, the images of the other goddesses were destroyed in Arabia but the one of Manat was secretly sent away to Kathiawar (in modern Gujarat) for safekeeping. Since the idol of Manat was an aniconic image of black stone, it could have been easily confused with a lingam at Somnath. Mahmud is said to have broken the idol and taken away parts of it as loot, placing them so that people would walk on them. In his letters to the Caliphate, Mahmud exaggerated the size, wealth and religious significance of the Somnath temple, receiving grandiose titles from the Caliph in return.
The wooden structure was replaced by Kumarapala (r. 1143–72), who rebuilt the temple out of stone.
Historical records which were compiled by the Muslim historian Maulana Hakim Saiyid Abdul Hai attest to the religious violence which occurred during the Mamluk dynasty under Qutb-ud-din Aybak. The first mosque built in Delhi, the "Quwwat al-Islam" was built with demolished parts of 20 Hindu and Jain temples. This pattern of iconoclasm was common during his reign.
During the Delhi Sultanate, a Muslim army led by Malik Kafur, a general of Alauddin Khalji, pursued four violent campaigns into south India between 1309 and 1311, against the Hindu kingdoms of Devgiri (Maharashtra), Warangal (Telangana), Dwarasamudra (Karnataka) and Madurai (Tamil Nadu). Many temples were plundered; the Hoysaleswara Temple and others were ruthlessly destroyed.
In Kashmir, Sikandar Shah Miri (1389–1413) began expanding, and unleashed religious violence that earned him the name but-shikan, or 'idol-breaker'. He earned this sobriquet because of the sheer scale of desecration and destruction of Hindu and Buddhist temples, shrines, ashrams, hermitages, and other holy places in what is now known as Kashmir and its neighboring territories. Firishta states, "After the emigration of the Brahmins, Sikundur ordered all the temples in Kashmeer to be thrown down." He destroyed the vast majority of Hindu and Buddhist temples within his reach in the Kashmir region (north and northwest India).
In the 1460s, Kapilendra, founder of the Suryavamsi Gajapati dynasty, sacked the Shaiva and Vaishnava temples in the Cauvery delta in the course of wars of conquest in the Tamil country. Vijayanagara king Krishnadevaraya looted a Bala Krishna temple in Udayagiri in 1514, and looted a Vitthala temple in Pandharpur in 1520.
A regional tradition, along with the Hindu text Madala Panji, states that Kalapahar attacked and damaged the Konark Sun Temple in 1568, as well as many others in Orissa.
Some of the most dramatic cases of iconoclasm by Muslims are found in parts of India where Hindu and Buddhist temples were razed and mosques erected in their place. Aurangzeb, the 6th Mughal Emperor, destroyed the famous Hindu temples at Varanasi and Mathura, reversing his ancestor Akbar's policy of religious freedom and establishing Sharia across his empire.
Exact data on the nature and number of Hindu temples destroyed by Christian missionaries and the Portuguese government are unavailable. Some 160 temples were allegedly razed to the ground in Tiswadi (Ilhas de Goa) by 1566. Between 1566 and 1567, a campaign by Franciscan missionaries destroyed another 300 Hindu temples in Bardez (North Goa). In Salcete (South Goa), approximately another 300 Hindu temples were destroyed by the Christian officials of the Inquisition. Numerous Hindu temples were destroyed elsewhere at Assolna and Cuncolim by Portuguese authorities. A 1569 royal letter in Portuguese archives records that all Hindu temples in its colonies in India had been burnt and razed to the ground. The English traveller Sir Thomas Herbert, 1st Baronet, who visited Goa in the 1600s, writes:
... as also the ruins of 200 Idol Temples which the Vice-Roy Antonio Norogna totally demolisht, that no memory might remain, or monuments continue, of such gross Idolatry. For not only there, but at Salsette also were two Temples or places of prophane Worship; one of them (by incredible toil cut out of the hard Rock) was divided into three Iles or Galleries, in which were figured many of their deformed Pagotha's, and of which an Indian (if to be credited) reports that there were in that Temple 300 of those narrow Galleries, and the Idols so exceeding ugly as would affright an European Spectator; nevertheless this was a celebrated place, and so abundantly frequented by Idolaters, as induced the Portuguise in zeal with a considerable force to master the Town and to demolish the Temples, breaking in pieces all that monstrous brood of mishapen Pagods. In Goa nothing is more observable now than the fortifications, the Vice-Roy and Arch-bishops Palaces, and the Churches. ...
On 25 December 1927, during the Mahad Satyagraha, Dr. Ambedkar and his supporters strongly criticised, condemned and then burned copies of the Manusmriti on a pyre in a specially dug pit. The Manusmriti, one of the sacred Hindu texts, is the religious basis of casteist laws and values in Hinduism, and hence has been held responsible for the social and economic plight of tens of millions of untouchables and lower-caste Hindus. One of the greatest acts of iconoclasm of its time, this explosive incident rocked Hindu society. Ambedkarites continue to observe 25 December as "Manusmriti Dahan Divas" (Manusmriti Burning Day) and burn copies of the Manusmriti on this day.
The most high-profile case in independent India came in 1992, when a Hindu mob, led by the Vishva Hindu Parishad and Bajrang Dal, destroyed the 430-year-old Babri Masjid in Ayodhya, which is claimed to have been built after the destruction of a Ram Mandir.
There have been a number of anti-Buddhist campaigns in Chinese history that led to the destruction of Buddhist temples and images. One of the most notable of these campaigns was the Great Anti-Buddhist Persecution of the Tang dynasty.
During and after the 1911 Xinhai Revolution, there was widespread destruction of religious and secular images in China.
During the Northern Expedition in Guangxi in 1926, Kuomintang General Bai Chongxi led his troops in destroying Buddhist temples and smashing Buddhist images, turning the temples into schools and Kuomintang party headquarters. It was reported that almost all of the viharas in Guangxi were destroyed and the monks were removed. Bai also led a wave of anti-foreignism in Guangxi, attacking Americans, Europeans, and other foreigners, and generally making the province unsafe for foreigners and missionaries. Westerners fled from the province and some Chinese Christians were also attacked as imperialist agents. The three goals of the movement were anti-foreignism, anti-imperialism and anti-religion. Bai led the anti-religious movement against superstition. Huang Shaohong, also a Kuomintang member of the New Guangxi clique, supported Bai's campaign. The anti-religious campaign was agreed upon by all Guangxi Kuomintang members.
There was extensive destruction of religious and secular imagery in Tibet after it was invaded and occupied by China.
Many religious and secular images were destroyed during the Cultural Revolution of 1966–1976, ostensibly because they were a holdover from China's traditional past (which the Communist regime led by Mao Zedong reviled). The Cultural Revolution included widespread destruction of historic artworks in public places and private collections, whether religious or secular. Objects in state museums were mostly left intact.
According to an article in Buddhist-Christian Studies:
Over the course of the last decade [1990s] a fairly large number of Buddhist temples in South Korea have been destroyed or damaged by fire by Christian fundamentalists. More recently, Buddhist statues have been identified as idols, and attacked and decapitated in the name of Jesus. Arrests are hard to effect, as the arsonists and vandals work by stealth of night.
Beginning c. 1243 AD with the death of Indravarman II, the Khmer Empire went through a period of iconoclasm. At the beginning of the reign of the next king, Jayavarman VIII, the Kingdom went back to Hinduism and the worship of Shiva. Many of the Buddhist images were destroyed by Jayavarman VIII, who reestablished previously Hindu shrines that had been converted to Buddhism by his predecessor. Carvings of the Buddha at temples such as Preah Khan were destroyed, and during this period the Bayon Temple was made a temple to Shiva, with the central 3.6 meter tall statue of the Buddha cast to the bottom of a nearby well.
Revolutions and changes of regime, whether through uprising of the local population, foreign invasion, or a combination of both, are often accompanied by the public destruction of statues and monuments identified with the previous regime. This may also be known as damnatio memoriae, the ancient Roman practice of official obliteration of the memory of a specific individual. Stricter definitions of "iconoclasm" exclude both types of action, reserving the term for religious or more widely cultural destruction. In many cases, such as Revolutionary Russia or Ancient Egypt, this distinction can be hard to make.
Among Roman emperors and other political figures subject to decrees of damnatio memoriae were Sejanus, Publius Septimius Geta, and Domitian. Several emperors, such as Domitian and Commodus, had during their reigns erected numerous statues of themselves, which were pulled down and destroyed when they were overthrown.
The perception that damnatio memoriae in the Classical world was an act of erasing memory has been challenged by scholars who have argued that it "did not negate historical traces, but created gestures which served to dishonor the record of the person and so, in an oblique way, to confirm memory," and was in effect a spectacular display of "pantomime forgetfulness." Examining cases of political monument destruction in modern Irish history, Guy Beiner has demonstrated that iconoclastic vandalism often entails subtle expressions of ambiguous remembrance and that, rather than effacing memory, such acts of de-commemorating effectively preserve memory in obscure forms.
Throughout the radical phase of the French Revolution, iconoclasm was supported by members of the government as well as the citizenry. Numerous monuments, religious works, and other historically significant pieces were destroyed in an attempt to eradicate any memory of the Old Regime. A statue of King Louis XV in the Paris square which until then bore his name was pulled down and destroyed. This was a prelude to the guillotining of his successor Louis XVI on the same site, renamed "Place de la Révolution" (at present Place de la Concorde). Later that year, the bodies of many French kings were exhumed from the Basilica of Saint-Denis and dumped in a mass grave.
Some episodes of iconoclasm were carried out spontaneously by crowds of citizens, including the destruction of statues of kings during the insurrection of 10 August 1792 in Paris. Some were directly sanctioned by the Republican government, including the Saint-Denis exhumations. Nonetheless, the Republican government also took steps to preserve historic artworks, notably by founding the Louvre museum to house and display the former royal art collection. This allowed the physical objects and national heritage to be preserved while stripping them of their association with the monarchy. Alexandre Lenoir saved many royal monuments by diverting them to preservation in a museum.
The statue of Napoleon on the column at Place Vendôme, Paris was also the target of iconoclasm several times: destroyed after the Bourbon Restoration, restored by Louis-Philippe, destroyed during the Paris Commune and restored by Adolphe Thiers.
After Napoleon conquered the Italian city of Pavia, local Jacobins destroyed the Regisole, a bronze equestrian monument dating back to Classical times. The Jacobins considered it a symbol of royal authority, but it had been a prominent Pavia landmark for nearly a thousand years, and its destruction aroused much indignation and precipitated a revolt by the inhabitants of Pavia against the French, which was quelled by Napoleon after a furious urban fight.
Other examples of political destruction of images include:
During and after the October Revolution, widespread destruction of religious and secular imagery in Russia took place, as well as the destruction of imagery related to the Imperial family. The Revolution was accompanied by destruction of monuments of tsars, as well as the destruction of imperial eagles at various locations throughout Russia. According to Christopher Wharton:
In front of a Moscow Cathedral, crowds cheered as the enormous statue of Tsar Alexander III was bound with ropes and gradually beaten to the ground. After a considerable amount of time, the statue was decapitated and its remaining parts were broken into rubble.
The Soviet Union actively destroyed religious sites, including Russian Orthodox churches and Jewish cemeteries, in order to discourage religious practice and curb the activities of religious groups.
During the Hungarian Revolution of 1956 and during the Revolutions of 1989, protesters often attacked and took down sculptures and images of Joseph Stalin, such as the Stalin Monument in Budapest.
The fall of Communism in 1989–1991 was also followed by the destruction or removal of statues of Vladimir Lenin and other Communist leaders in the former Soviet Union and in other Eastern Bloc countries. Particularly well-known was the destruction of "Iron Felix", the statue of Felix Dzerzhinsky outside the KGB's headquarters. Another statue of Dzerzhinsky was destroyed in a Warsaw square that was named after him during communist rule, but which is now called Bank Square.
During the American Revolution, the Sons of Liberty pulled down and destroyed the gilded lead statue of George III of the United Kingdom on Bowling Green (New York City), melting it down to be recast as ammunition. Similar acts have accompanied the independence of most ex-colonial territories. Sometimes relatively intact monuments are moved to a collected display in a less prominent place, as in India and also post-Communist countries.
In August 2017, a statue of a Confederate soldier dedicated to "the boys who wore the gray" was pulled down from its pedestal in front of Durham County Courthouse in North Carolina by protesters. This followed the events at the 2017 Unite the Right rally in response to growing calls to remove Confederate monuments and memorials across the U.S.
During the George Floyd protests of 2020, demonstrators pulled down dozens of statues which they considered symbols of the Confederacy, slavery, segregation, or racism, including the statue of Williams Carter Wickham in Richmond, Virginia.
Further demonstrations in the wake of the George Floyd protests have resulted in the removal of:
Multiple statues of early European explorers and founders were also vandalized, including those of Christopher Columbus, George Washington, and Thomas Jefferson.
A statue of the African-American abolitionist statesman Frederick Douglass was vandalised in Rochester, New York, by being torn from its base and left close to a nearby river gorge. Donald Trump attributed the act to anarchists, but he did not substantiate his claim nor did he offer a theory on motive. Cornell William Brooks, former president of the NAACP, theorised that this was an act of revenge from white supremacists. Carvin Eison, who led the project that brought the Douglass statues to Rochester, thought it was unlikely that the Douglass statue was toppled by someone who was upset about monuments honoring Confederate figures, and added that "it's only logical that it was some kind of retaliation event in someone's mind". Police did not find evidence that supported or refuted either claim, and the vandalism case remains unsolved. | [
{
"paragraph_id": 0,
"text": "Iconoclasm (from Greek: εἰκών, eikṓn, 'figure, icon' + κλάω, kláō, 'to break') is the social belief in the importance of the destruction of icons and other images or monuments, most frequently for religious or political reasons. People who engage in or support iconoclasm are called iconoclasts, a term that has come to be figuratively applied to any individual who challenges \"cherished beliefs or venerated institutions on the grounds that they are erroneous or pernicious.\"",
"title": ""
},
{
"paragraph_id": 1,
"text": "Conversely, one who reveres or venerates religious images is called (by iconoclasts) an iconolater; in a Byzantine context, such a person is called an iconodule or iconophile. Iconoclasm does not generally encompass the destruction of the images of a specific ruler after his or her death or overthrow, a practice better known as damnatio memoriae.",
"title": ""
},
{
"paragraph_id": 2,
"text": "While iconoclasm may be carried out by adherents of a different religion, it is more commonly the result of sectarian disputes between factions of the same religion. The term originates from the Byzantine Iconoclasm, the struggles between proponents and opponents of religious icons in the Byzantine Empire from 726 to 842 AD. Degrees of iconoclasm vary greatly among religions and their branches, but are strongest in religions which oppose idolatry, including the Abrahamic religions. Outside of the religious context, iconoclasm can refer to movements for widespread destruction in symbols of an ideology or cause, such as the destruction of monarchist symbols during the French Revolution.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In the Bronze Age, the most significant episode of iconoclasm occurred in Egypt during the Amarna Period, when Akhenaten, based in his new capital of Akhetaten, instituted a significant shift in Egyptian artistic styles alongside a campaign of intolerance towards the traditional gods and a new emphasis on a state monolatristic tradition focused on the god Aten, the Sun disk—many temples and monuments were destroyed as a result:",
"title": "Early religious iconoclasm"
},
{
"paragraph_id": 4,
"text": "In rebellion against the old religion and the powerful priests of Amun, Akhenaten ordered the eradication of all of Egypt's traditional gods. He sent royal officials to chisel out and destroy every reference to Amun and the names of other deities on tombs, temple walls, and cartouches to instill in the people that the Aten was the one true god.",
"title": "Early religious iconoclasm"
},
{
"paragraph_id": 5,
"text": "Public references to Akhenaten were destroyed soon after his death. Comparing the ancient Egyptians with the Israelites, Jan Assmann writes:",
"title": "Early religious iconoclasm"
},
{
"paragraph_id": 6,
"text": "For Egypt, the greatest horror was the destruction or abduction of the cult images. In the eyes of the Israelites, the erection of images meant the destruction of divine presence; in the eyes of the Egyptians, this same effect was attained by the destruction of images. In Egypt, iconoclasm was the most terrible religious crime; in Israel, the most terrible religious crime was idolatry. In this respect Osarseph alias Akhenaten, the iconoclast, and the Golden Calf, the paragon of idolatry, correspond to each other inversely, and it is strange that Aaron could so easily avoid the role of the religious criminal. It is more than probable that these traditions evolved under mutual influence. In this respect, Moses and Akhenaten became, after all, closely related.",
"title": "Early religious iconoclasm"
},
{
"paragraph_id": 7,
"text": "According to the Hebrew Bible, God instructed the Israelites to \"destroy all [the] engraved stones, destroy all [the] molded images, and demolish all [the] high places\" of the indigenous Canaanite population as soon as they entered the Promised Land.",
"title": "Early religious iconoclasm"
},
{
"paragraph_id": 8,
"text": "In Judaism, King Hezekiah purged Solomon's Temple in Jerusalem and all figures were also destroyed in the Land of Israel, including the Nehushtan, as recorded in the Second Book of Kings. His reforms were reversed in the reign of his son Manasseh.",
"title": "Early religious iconoclasm"
},
{
"paragraph_id": 9,
"text": "Scattered expressions of opposition to the use of images have been reported: the Synod of Elvira appeared to endorse iconoclasm; Canon 36 states, \"Pictures are not to be placed in churches, so that they do not become objects of worship and adoration.\" A possible translation is also: \"There shall be no pictures in the church, lest what is worshipped and adored should be depicted on the walls.\" The date of this canon is disputed. Proscription ceased after the destruction of pagan temples. However, widespread use of Christian iconography only began as Christianity increasingly spread among Gentiles after the legalization of Christianity by Roman Emperor Constantine (c. 312 AD). During the process of Christianisation under Constantine, Christian groups destroyed the images and sculptures expressive of the Roman Empire's polytheist state religion.",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 10,
"text": "Among early church theologians, iconoclastic tendencies were supported by theologians such as: Tertullian, Clement of Alexandria, Origen, Lactantius, Justin Martyr, Eusebius and Epiphanius.",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 11,
"text": "The period after the reign of Byzantine Emperor Justinian (527–565) evidently saw a huge increase in the use of images, both in volume and quality, and a gathering aniconic reaction.",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 12,
"text": "One notable change within the Byzantine Empire came in 695, when Justinian II's government added a full-face image of Christ on the obverse of imperial gold coins. The change caused the Caliph Abd al-Malik to stop his earlier adoption of Byzantine coin types. He started a purely Islamic coinage with lettering only. A letter by the Patriarch Germanus, written before 726 to two iconoclast bishops, says that \"now whole towns and multitudes of people are in considerable agitation over this matter,\" but there is little written evidence of the debate.",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 13,
"text": "Government-led iconoclasm began with Byzantine Emperor Leo III, who issued a series of edicts between 726 and 730 against the veneration of images. The religious conflict created political and economic divisions in Byzantine society; iconoclasm was generally supported by the Eastern, poorer, non-Greek peoples of the Empire who had to frequently deal with raids from the new Muslim Empire. On the other hand, the wealthier Greeks of Constantinople and the peoples of the Balkan and Italian provinces strongly opposed iconoclasm.",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 14,
"text": "Peter of Bruys opposed the usage of religious images, the Strigolniki were also possibly iconoclastic. Claudius of Turin was the bishop of Turin from 817 until his death. He is most noted for teaching iconoclasm.",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 15,
"text": "The first iconoclastic wave happened in Wittenberg in the early 1520s under reformers Thomas Müntzer and Andreas Karlstadt, in the absence of Martin Luther, who then, concealed under the pen-name of 'Junker Jörg', intervened to calm things down. Luther argued that the mental picturing of Christ when reading the Scriptures was similar in character to artistic renderings of Christ.",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 16,
"text": "In contrast to the Lutherans who favoured certain types of sacred art in their churches and homes, the Reformed (Calvinist) leaders, in particular Andreas Karlstadt, Huldrych Zwingli and John Calvin, encouraged the removal of religious images by invoking the Decalogue's prohibition of idolatry and the manufacture of graven (sculpted) images of God. As a result, individuals attacked statues and images, most famously in the beeldenstorm across the Low Countries in 1566. However, in most cases, civil authorities removed images in an orderly manner in the newly Reformed Protestant cities and territories of Europe.",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 17,
"text": "The belief of iconoclasm caused havoc throughout Europe. In 1523, specifically due to the Swiss reformer Huldrych Zwingli, a vast number of his followers viewed themselves as being involved in a spiritual community that in matters of faith should obey neither the visible Church nor lay authorities. According to Peter George Wallace \"Zwingli's attack on images, at the first debate, triggered iconoclastic incidents in Zurich and the villages under civic jurisdiction that the reformer was unwilling to condone.\" Due to this action of protest against authority, \"Zwingli responded with a carefully reasoned treatise that men could not live in society without laws and constraint\".",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 18,
"text": "Significant iconoclastic riots took place in Basel (in 1529), Zurich (1523), Copenhagen (1530), Münster (1534), Geneva (1535), Augsburg (1537), Scotland (1559), Rouen (1560), and Saintes and La Rochelle (1562). Calvinist iconoclasm in Europe \"provoked reactive riots by Lutheran mobs\" in Germany and \"antagonized the neighbouring Eastern Orthodox\" in the Baltic region.",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 19,
"text": "The Seventeen Provinces (now the Netherlands, Belgium, and parts of Northern France) were disrupted by widespread Calvinist iconoclasm in the summer of 1566. This period, known as the Beeldenstorm, began with the destruction of the statuary of the Monastery of Saint Lawrence in Steenvoorde after a \"Hagenpreek,\" or field sermon, by Sebastiaan Matte on 10 August 1566; by October the wave of furor had gone all through the Spanish Netherlands up to Groningen. Hundreds of other attacks included the sacking of the Monastery of Saint Anthony after a sermon by Jacob de Buysere. The Beeldenstorm marked the start of the revolution against the Spanish forces and the Catholic Church.",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 20,
"text": "During the Reformation in England, which started during the reign of Anglican monarch Henry VIII, and was urged on by reformers such as Hugh Latimer and Thomas Cranmer, limited official action was taken against religious images in churches in the late 1530s. Henry's young son, Edward VI, came to the throne in 1547 and, under Cranmer's guidance, issued injunctions for Religious Reforms in the same year and in 1550, an Act of Parliament \"for the abolition and putting away of divers books and images.\"",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 21,
"text": "During the English Civil War, the Parliamentarians reorganised the administration of East Anglia into the Eastern Association of counties. This covered some of the wealthiest counties in England, which in turn financed a substantial and significant military force. After Earl of Manchester was appointed the commanding officer of these forces, and in turn he appointed Smasher Dowsing as Provost Marshal, with a warrant to demolish religious images which were considered to be superstitious or linked with popism. Bishop Joseph Hall of Norwich described the events of 1643 when troops and citizens, encouraged by a Parliamentary ordinance against superstition and idolatry, behaved thus:",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 22,
"text": "Lord what work was here! What clattering of glasses! What beating down of walls! What tearing up of monuments! What pulling down of seats! What wresting out of irons and brass from the windows! What defacing of arms! What demolishing of curious stonework! What tooting and piping upon organ pipes! And what a hideous triumph in the market-place before all the country, when all the mangled organ pipes, vestments, both copes and surplices, together with the leaden cross which had newly been sawn down from the Green-yard pulpit and the service-books and singing books that could be carried to the fire in the public market-place were heaped together.",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 23,
"text": "Protestant Christianity was not uniformly hostile to the use of religious images. Martin Luther taught the \"importance of images as tools for instruction and aids to devotion,\" stating: \"If it is not a sin but good to have the image of Christ in my heart, why should it be a sin to have it in my eyes?\" Lutheran churches retained ornate church interiors with a prominent crucifix, reflecting their high view of the real presence of Christ in Eucharist. As such, \"Lutheran worship became a complex ritual choreography set in a richly furnished church interior.\" For Lutherans, \"the Reformation renewed rather than removed the religious image.\"",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 24,
"text": "Lutheran scholar Jeremiah Ohl writes:",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 25,
"text": "Zwingli and others for the sake of saving the Word rejected all plastic art; Luther, with an equal concern for the Word, but far more conservative, would have all the arts to be the servants of the Gospel. \"I am not of the opinion\" said [Luther], \"that through the Gospel all the arts should be banished and driven away, as some zealots want to make us believe; but I wish to see them all, especially music, in the service of Him Who gave and created them.\" Again he says: \"I have myself heard those who oppose pictures, read from my German Bible.... But this contains many pictures of God, of the angels, of men, and of animals, especially in the Revelation of St. John, in the books of Moses, and in the book of Joshua. We therefore kindly beg these fanatics to permit us also to paint these pictures on the wall that they may be remembered and better understood, inasmuch as they can harm as little on the walls as in books. Would to God that I could persuade those who can afford it to paint the whole Bible on their houses, inside and outside, so that all might see; this would indeed be a Christian work. For I am convinced that it is God's will that we should hear and learn what He has done, especially what Christ suffered. But when I hear these things and meditate upon them, I find it impossible not to picture them in my heart. Whether I want to or not, when I hear, of Christ, a human form hanging upon a cross rises up in my heart: just as I see my natural face reflected when I look into water. Now if it is not sinful for me to have Christ's picture in my heart, why should it be sinful to have it before my eyes?",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 26,
"text": "The Ottoman Sultan Suleiman the Magnificent, who had pragmatic reasons to support the Dutch Revolt (the rebels, like himself, were fighting against Spain) also completely approved of their act of \"destroying idols,\" which accorded well with Muslim teachings.",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 27,
"text": "A bit later in Dutch history, in 1627 the artist Johannes van der Beeck was arrested and tortured, charged with being a religious non-conformist and a blasphemer, heretic, atheist, and Satanist. The 25 January 1628 judgment from five noted advocates of The Hague pronounced him guilty of \"blasphemy against God and avowed atheism, at the same time as leading a frightful and pernicious lifestyle. At the court's order his paintings were burned, and only a few of them survive.\"",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 28,
"text": "From the 16th through the 19th centuries, many of the polytheistic religious deities and texts of pre-colonial Americas, Oceania, and Africa were destroyed by Christian missionaries and their converts, such as during the Spanish conquest of the Aztec Empire and the Spanish conquest of the Inca Empire.",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 29,
"text": "In Japan during the early modern age, the spread of Catholicism also involved the repulsion of non-Christian religious structures, including Buddhist temples and Shinto shrines and figures. At times of conflict with rivals or some time after the conversion of several daimyos, Christian converts would often destroy Buddhist and Shinto religious structures.",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 30,
"text": "Many of the moai of Easter Island were toppled during the 18th century in the iconoclasm of civil wars before any European encounter. Other instances of iconoclasm may have occurred throughout Eastern Polynesia during its conversion to Christianity in the 19th century.",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 31,
"text": "After the Second Vatican Council in the late 20th century, some Roman Catholic parish churches discarded much of their traditional imagery, art, and architecture.",
"title": "Iconoclasm in Christian history"
},
{
"paragraph_id": 32,
"text": "Islam has a much stronger tradition of forbidding the depiction of figures, especially religious figures, with Sunni Islam forbidding it more than Shia Islam. In the history of Islam, the act of removing idols from the Ka'ba in Mecca has great symbolic and historic importance for all believers.",
"title": "Muslim iconoclasm"
},
{
"paragraph_id": 33,
"text": "In general, Muslim societies have avoided the depiction of living beings (both animals and humans) within such sacred spaces as mosques and madrasahs. This ban on figural representation is not based on the Qur'an, instead, it is based on traditions which are described within the Hadith. The prohibition of figuration has not always been extended to the secular sphere, and a robust tradition of figural representation exists within Muslim art. However, Western authors have tended to perceive \"a long, culturally determined, and unchanging tradition of violent iconoclastic acts\" within Islamic society.",
"title": "Muslim iconoclasm"
},
{
"paragraph_id": 34,
"text": "The first act of Muslim iconoclasm dates to the beginning of Islam, in 630, when the various statues of Arabian deities housed in the Kaaba in Mecca were destroyed. There is a tradition that Muhammad spared a fresco of Mary and Jesus. This act was intended to bring an end to the idolatry which, in the Muslim view, characterized Jahiliyyah.",
"title": "Muslim iconoclasm"
},
{
"paragraph_id": 35,
"text": "The destruction of the idols of Mecca did not, however, determine the treatment of other religious communities living under Muslim rule after the expansion of the caliphate. Most Christians under Muslim rule, for example, continued to produce icons and to decorate their churches as they wished. A major exception to this pattern of tolerance in early Islamic history was the \"Edict of Yazīd\", issued by the Umayyad caliph Yazīd II in 722–723. This edict ordered the destruction of crosses and Christian images within the territory of the caliphate. Researchers have discovered evidence that the order was followed, particularly in present-day Jordan, where archaeological evidence shows the removal of images from the mosaic floors of some, although not all, of the churches that stood at this time. But Yazīd's iconoclastic policies were not continued by his successors, and Christian communities of the Levant continued to make icons without significant interruption from the sixth century to the ninth.",
"title": "Muslim iconoclasm"
},
{
"paragraph_id": 36,
"text": "Al-Maqrīzī, writing in the 15th century, attributes the missing nose on the Great Sphinx of Giza to iconoclasm by Muhammad Sa'im al-Dahr, a Sufi Muslim in the mid-1300s. He was reportedly outraged by local Muslims making offerings to the Great Sphinx in the hope of controlling the flood cycle, and he was later executed for vandalism. However, whether this was actually the cause of the missing nose has been debated by historians. Mark Lehner, having performed an archaeological study, concluded that it was broken with instruments at an earlier unknown time between the 3rd and 10th centuries.",
"title": "Muslim iconoclasm"
},
{
"paragraph_id": 37,
"text": "Certain conquering Muslim armies have used local temples or houses of worship as mosques. An example is Hagia Sophia in Istanbul (formerly Constantinople), which was converted into a mosque in 1453. Most icons were desecrated and the rest were covered with plaster. In 1934 the government of Turkey decided to convert the Hagia Sophia into a museum and the restoration of the mosaics was undertaken by the American Byzantine Institute beginning in 1932.",
"title": "Muslim iconoclasm"
},
{
"paragraph_id": 38,
"text": "Certain Muslim denominations continue to pursue iconoclastic agendas. There has been much controversy within Islam over the recent and apparently on-going destruction of historic sites by Saudi Arabian authorities, prompted by the fear they could become the subject of \"idolatry.\"",
"title": "Muslim iconoclasm"
},
{
"paragraph_id": 39,
"text": "A recent act of iconoclasm was the 2001 destruction of the giant Buddhas of Bamyan by the then-Taliban government of Afghanistan. The act generated worldwide protests and was not supported by other Muslim governments and organizations. It was widely perceived in the Western media as a result of the Muslim prohibition against figural decoration. Such an account overlooks \"the coexistence between the Buddhas and the Muslim population that marveled at them for over a millennium\" before their destruction. According to art historian F. B. Flood, analysis of the Taliban's statements regarding the Buddhas suggest that their destruction was motivated more by political than by theological concerns. Taliban spokesmen have given many different explanations of the motives for the destruction.",
"title": "Muslim iconoclasm"
},
{
"paragraph_id": 40,
"text": "During the Tuareg rebellion of 2012, the radical Islamist militia Ansar Dine destroyed various Sufi shrines from the 15th and 16th centuries in the city of Timbuktu, Mali. In 2016, the International Criminal Court (ICC) sentenced Ahmad al-Faqi al-Mahdi, a former member of Ansar Dine, to nine years in prison for this destruction of cultural world heritage. This was the first time that the ICC convicted a person for such a crime.",
"title": "Muslim iconoclasm"
},
{
"paragraph_id": 41,
"text": "The short-lived Islamic State of Iraq and the Levant carried out iconoclastic attacks such as the destruction of Shia mosques and shrines. Notable incidents include blowing up the Mosque of the Prophet Yunus (Jonah) and destroying the Shrine to Seth in Mosul.",
"title": "Muslim iconoclasm"
},
{
"paragraph_id": 42,
"text": "In early Medieval India, there were numerous recorded instances of temple desecration by Indian kings against rival Indian kingdoms, which involved conflicts between devotees of different Hindu deities, as well as conflicts between Hindus, Buddhists, and Jains.",
"title": "Iconoclasm in India"
},
{
"paragraph_id": 43,
"text": "In the 8th century, Bengali troops from the Buddhist Pala Empire desecrated temples of Vishnu, the state deity of Lalitaditya's kingdom in Kashmir. In the early 9th century, Indian Hindu kings from Kanchipuram and the Pandyan king Srimara Srivallabha looted Buddhist temples in Sri Lanka. In the early 10th century, the Pratihara king Herambapala looted an image from a temple in the Sahi kingdom of Kangra, which was later looted by the Pratihara king Yashovarman.",
"title": "Iconoclasm in India"
},
{
"paragraph_id": 44,
"text": "Records from the campaign recorded in the Chach Nama record the destruction of temples during the early 8th century when the Umayyad governor of Damascus, al-Hajjaj ibn Yusuf, mobilized an expedition of 6000 cavalry under Muhammad bin Qasim in 712.",
"title": "Iconoclasm in India"
},
{
"paragraph_id": 45,
"text": "Historian Upendra Thakur records the persecution of Hindus and Buddhists:",
"title": "Iconoclasm in India"
},
{
"paragraph_id": 46,
"text": "Muhammad triumphantly marched into the country, conquering Debal, Sehwan, Nerun, Brahmanadabad, Alor and Multan one after the other in quick succession, and in less than a year and a half, the far-flung Hindu kingdom was crushed ... There was a fearful outbreak of religious bigotry in several places and temples were wantonly desecrated. At Debal, the Nairun and Aror temples were demolished and converted into mosques.",
"title": "Iconoclasm in India"
},
{
"paragraph_id": 47,
"text": "Perhaps the most notorious episode of iconoclasm in India was Mahmud of Ghazni's attack on the Somnath Temple from across the Thar Desert. The temple was first raided in 725, when Junayad, the governor of Sind, sent his armies to destroy it. In 1024, during the reign of Bhima I, the prominent Turkic-Muslim ruler Mahmud of Ghazni raided Gujarat, plundering the Somnath Temple and breaking its jyotirlinga despite pleas by Brahmins not to break it. He took away a booty of 20 million dinars. The attack may have been inspired by the belief that an idol of the goddess Manat had been secretly transferred to the temple. According to the Ghaznavid court-poet Farrukhi Sistani, who claimed to have accompanied Mahmud on his raid, Somnat (as rendered in Persian) was a garbled version of su-manat referring to the goddess Manat. According to him, as well as a later Ghaznavid historian Abu Sa'id Gardezi, the images of the other goddesses were destroyed in Arabia but the one of Manat was secretly sent away to Kathiawar (in modern Gujarat) for safekeeping. Since the idol of Manat was an aniconic image of black stone, it could have been easily confused with a lingam at Somnath. Mahmud is said to have broken the idol and taken away parts of it as loot and placed so that people would walk on it. In his letters to the Caliphate, Mahmud exaggerated the size, wealth and religious significance of the Somnath temple, receiving grandiose titles from the Caliph in return.",
"title": "Iconoclasm in India"
},
{
"paragraph_id": 48,
"text": "The wooden structure was replaced by Kumarapala (r. 1143–72), who rebuilt the temple out of stone.",
"title": "Iconoclasm in India"
},
{
"paragraph_id": 49,
"text": "Historical records which were compiled by the Muslim historian Maulana Hakim Saiyid Abdul Hai attest to the religious violence which occurred during the Mamluk dynasty under Qutb-ud-din Aybak. The first mosque built in Delhi, the \"Quwwat al-Islam\" was built with demolished parts of 20 Hindu and Jain temples. This pattern of iconoclasm was common during his reign.",
"title": "Iconoclasm in India"
},
{
"paragraph_id": 50,
"text": "During the Delhi Sultanate, a Muslim army led by Malik Kafur, a general of Alauddin Khalji, pursued four violent campaigns into south India, between 1309 and 1311, against the Hindu kingdoms of Devgiri (Maharashtra), Warangal (Telangana), Dwarasamudra (Karnataka) and Madurai (Tamil Nadu). Many Temples were plundered; Hoysaleswara Temple and others were ruthlessly destroyed.",
"title": "Iconoclasm in India"
},
{
"paragraph_id": 51,
"text": "In Kashmir, Sikandar Shah Miri (1389–1413) began expanding, and unleashed religious violence that earned him the name but-shikan, or 'idol-breaker'. He earned this sobriquet because of the sheer scale of desecration and destruction of Hindu and Buddhist temples, shrines, ashrams, hermitages, and other holy places in what is now known as Kashmir and its neighboring territories. Firishta states, \"After the emigration of the Brahmins, Sikundur ordered all the temples in Kashmeer to be thrown down.\" He destroyed vast majority of Hindu and Buddhist temples in his reach in Kashmir region (north and northwest India).",
"title": "Iconoclasm in India"
},
{
"paragraph_id": 52,
"text": "In the 1460s, Kapilendra, founder of the Suryavamsi Gajapati dynasty, sacked the Shaiva and Vaishnava temples in the Cauvery delta in the course of wars of conquest in the Tamil country. Vijayanagara king Krishnadevaraya looted a Bala Krishna temple in Udayagiri in 1514, and looted a Vitthala temple in Pandharpur in 1520.",
"title": "Iconoclasm in India"
},
{
"paragraph_id": 53,
"text": "A regional tradition, along with the Hindu text Madala Panji, states that Kalapahar attacked and damaged the Konark Sun Temple in 1568, as well as many others in Orissa.",
"title": "Iconoclasm in India"
},
{
"paragraph_id": 54,
"text": "Some of the most dramatic cases of iconoclasm by Muslims are found in parts of India where Hindu and Buddhist temples were razed and mosques erected in their place. Aurangzeb, the 6th Mughal Emperor, destroyed the famous Hindu temples at Varanasi and Mathura, turning back on his ancestor Akbar's policy of religious freedom and establishing Sharia across his empire.",
"title": "Iconoclasm in India"
},
{
"paragraph_id": 55,
"text": "Exact data on the nature and number of Hindu temples destroyed by the Christian missionaries and Portuguese government are unavailable. Some 160 temples were allegedly razed to the ground in Tiswadi (Ilhas de Goa) by 1566. Between 1566 and 1567, a campaign by Franciscan missionaries destroyed another 300 Hindu temples in Bardez (North Goa). In Salcete (South Goa), approximately another 300 Hindu temples were destroyed by the Christian officials of the Inquisition. Numerous Hindu temples were destroyed elsewhere at Assolna and Cuncolim by Portuguese authorities. A 1569 royal letter in Portuguese archives records that all Hindu temples in its colonies in India had been burnt and razed to the ground. The English traveller Sir Thomas Herbert, 1st Baronet who visited Goa in the 1600s writes:",
"title": "Iconoclasm in India"
},
{
"paragraph_id": 56,
"text": "... as also the ruins of 200 Idol Temples which the Vice-Roy Antonio Norogna totally demolisht, that no memory might remain, or monuments continue, of such gross Idolatry. For not only there, but at Salsette also were two Temples or places of prophane Worship; one of them (by incredible toil cut out of the hard Rock) was divided into three Iles or Galleries, in which were figured many of their deformed Pagotha's, and of which an Indian (if to be credited) reports that there were in that Temple 300 of those narrow Galleries, and the Idols so exceeding ugly as would affright an European Spectator; nevertheless this was a celebrated place, and so abundantly frequented by Idolaters, as induced the Portuguise in zeal with a considerable force to master the Town and to demolish the Temples, breaking in pieces all that monstrous brood of mishapen Pagods. In Goa nothing is more observable now than the fortifications, the Vice-Roy and Arch-bishops Palaces, and the Churches. ...",
"title": "Iconoclasm in India"
},
{
"paragraph_id": 57,
"text": "Dr. Ambedkar and his supporters on 25 December 1927 in the Mahad Satyagraha strongly criticised, condemned and then burned copies of Manusmriti on a pyre in a specially dug pit. Manusmriti, one of the sacred Hindu texts, is the religious basis of casteist laws and values of Hinduism and hence was/is the reason of social and economic plight of crores of untouchables and lower caste Hindus. One of the greatest iconoclasts for all time, this explosive incident rocked the Hindu society. Ambedkarites continue to observe 25 December as \"Manusmriti Dahan Divas\" (Manusmriti Burning Day) and burn copies of Manusmriti on this day.",
"title": "Iconoclasm in India"
},
{
"paragraph_id": 58,
"text": "The most high-profile case of Independent India was in 1992. Hindu mob, led by the Vishva Hindu Parishad and Bajrang Dal, destroyed the 430-year-old Islamic Babri Masjid in Ayodhya which is claimed to be built after destroying the Ram Mandir.",
"title": "Iconoclasm in India"
},
{
"paragraph_id": 59,
"text": "There have been a number of anti-Buddhist campaigns in Chinese history that led to the destruction of Buddhist temples and images. One of the most notable of these campaigns was the Great Anti-Buddhist Persecution of the Tang dynasty.",
"title": "Iconoclasm in East Asia"
},
{
"paragraph_id": 60,
"text": "During and after the 1911 Xinhai Revolution, there was widespread destruction of religious and secular images in China.",
"title": "Iconoclasm in East Asia"
},
{
"paragraph_id": 61,
"text": "During the Northern Expedition in Guangxi in 1926, Kuomintang General Bai Chongxi led his troops in destroying Buddhist temples and smashing Buddhist images, turning the temples into schools and Kuomintang party headquarters. It was reported that almost all of the viharas in Guangxi were destroyed and the monks were removed. Bai also led a wave of anti-foreignism in Guangxi, attacking Americans, Europeans, and other foreigners, and generally making the province unsafe for foreigners and missionaries. Westerners fled from the province and some Chinese Christians were also attacked as imperialist agents. The three goals of the movement were anti-foreignism, anti-imperialism and anti-religion. Bai led the anti-religious movement against superstition. Huang Shaohong, also a Kuomintang member of the New Guangxi clique, supported Bai's campaign. The anti-religious campaign was agreed upon by all Guangxi Kuomintang members.",
"title": "Iconoclasm in East Asia"
},
{
"paragraph_id": 62,
"text": "There was extensive destruction of religious and secular imagery in Tibet after it was invaded and occupied by China.",
"title": "Iconoclasm in East Asia"
},
{
"paragraph_id": 63,
"text": "Many religious and secular images were destroyed during the Cultural Revolution of 1966–1976, ostensibly because they were a holdover from China's traditional past (which the Communist regime led by Mao Zedong reviled). The Cultural Revolution included widespread destruction of historic artworks in public places and private collections, whether religious or secular. Objects in state museums were mostly left intact.",
"title": "Iconoclasm in East Asia"
},
{
"paragraph_id": 64,
"text": "According to an article in Buddhist-Christian Studies:",
"title": "Iconoclasm in East Asia"
},
{
"paragraph_id": 65,
"text": "Over the course of the last decade [1990s] a fairly large number of Buddhist temples in South Korea have been destroyed or damaged by fire by Christian fundamentalists. More recently, Buddhist statues have been identified as idols, and attacked and decapitated in the name of Jesus. Arrests are hard to effect, as the arsonists and vandals work by stealth of night.",
"title": "Iconoclasm in East Asia"
},
{
"paragraph_id": 66,
"text": "Beginning c. 1243 AD with the death of Indravarman II, the Khmer Empire went through a period of iconoclasm. At the beginning of the reign of the next king, Jayavarman VIII, the Kingdom went back to Hinduism and the worship of Shiva. Many of the Buddhist images were destroyed by Jayavarman VIII, who reestablished previously Hindu shrines that had been converted to Buddhism by his predecessor. Carvings of the Buddha at temples such as Preah Khan were destroyed, and during this period the Bayon Temple was made a temple to Shiva, with the central 3.6 meter tall statue of the Buddha cast to the bottom of a nearby well.",
"title": "Iconoclasm in East Asia"
},
{
"paragraph_id": 67,
"text": "Revolutions and changes of regime, whether through uprising of the local population, foreign invasion, or a combination of both, are often accompanied by the public destruction of statues and monuments identified with the previous regime. This may also be known as damnatio memoriae, the ancient Roman practice of official obliteration of the memory of a specific individual. Stricter definitions of \"iconoclasm\" exclude both types of action, reserving the term for religious or more widely cultural destruction. In many cases, such as Revolutionary Russia or Ancient Egypt, this distinction can be hard to make.",
"title": "Political iconoclasm"
},
{
"paragraph_id": 68,
"text": "Among Roman emperors and other political figures subject to decrees of damnatio memoriae were Sejanus, Publius Septimius Geta, and Domitian. Several Emperors, such as Domitian and Commodus had during their reigns erected numerous statues of themselves, which were pulled down and destroyed when they were overthrown.",
"title": "Political iconoclasm"
},
{
"paragraph_id": 69,
"text": "The perception of damnatio memoriae in the Classical world was an act of erasing memory has been challenged by scholars who have argued that it \"did not negate historical traces, but created gestures which served to dishonor the record of the person and so, in an oblique way, to confirm memory,\" and was in effect a spectacular display of \"pantomime forgetfulness.\" Examining cases of political monument destruction in modern Irish history, Guy Beiner has demonstrated that iconoclastic vandalism often entails subtle expressions of ambiguous remembrance and that, rather than effacing memory, such acts of de-commemorating effectively preserve memory in obscure forms.",
"title": "Political iconoclasm"
},
{
"paragraph_id": 70,
"text": "Throughout the radical phase of the French Revolution, iconoclasm was supported by members of the government as well as the citizenry. Numerous monuments, religious works, and other historically significant pieces were destroyed in an attempt to eradicate any memory of the Old Regime. A statue of King Louis XV in the Paris square which until then bore his name, was pulled down and destroyed. This was a prelude to the guillotining of his successor Louis XVI in the same site, renamed \"Place de la Révolution\" (at present Place de la Concorde). Later that year, the bodies of many French kings were exhumed from the Basilica of Saint-Denis and dumped in a mass grave.",
"title": "Political iconoclasm"
},
{
"paragraph_id": 71,
"text": "Some episodes of iconoclasm were carried out spontaneously by crowds of citizens, including the destruction of statues of kings during the insurrection of 10 August 1792 in Paris. Some were directly sanctioned by the Republican government, including the Saint-Denis exhumations. Nonetheless, the Republican government also took steps to preserve historic artworks, notably by founding the Louvre museum to house and display the former royal art collection. This allowed the physical objects and national heritage to be preserved while stripping them of their association with the monarchy. Alexandre Lenoir saved many royal monuments by diverting them to preservation in a museum.",
"title": "Political iconoclasm"
},
{
"paragraph_id": 72,
"text": "The statue of Napoleon on the column at Place Vendôme, Paris was also the target of iconoclasm several times: destroyed after the Bourbon Restoration, restored by Louis-Philippe, destroyed during the Paris Commune and restored by Adolphe Thiers.",
"title": "Political iconoclasm"
},
{
"paragraph_id": 73,
"text": "After Napoleon conquered the Italian city of Pavia, local Pavia Jacobins destroyed the Regisole, a bronze classical equestrian monument dating back to Classical times. The Jacobins considered it a symbol of Royal authority, but it had been a prominent Pavia landmark for nearly a thousand years and its destruction aroused much indignation and precipitated a revolt by inhabitants of Pavia against the French, which was quelled by Napoleon after a furious urban fight.",
"title": "Political iconoclasm"
},
{
"paragraph_id": 74,
"text": "Other examples of political destruction of images include:",
"title": "Political iconoclasm"
},
{
"paragraph_id": 75,
"text": "During and after the October Revolution, widespread destruction of religious and secular imagery in Russia took place, as well as the destruction of imagery related to the Imperial family. The Revolution was accompanied by destruction of monuments of tsars, as well as the destruction of imperial eagles at various locations throughout Russia. According to Christopher Wharton:",
"title": "Political iconoclasm"
},
{
"paragraph_id": 76,
"text": "In front of a Moscow Cathedral, crowds cheered as the enormous statue of Tsar Alexander III was bound with ropes and gradually beaten to the ground. After a considerable amount of time, the statue was decapitated and its remaining parts were broken into rubble.",
"title": "Political iconoclasm"
},
{
"paragraph_id": 77,
"text": "The Soviet Union actively destroyed religious sites, including Russian Orthodox churches and Jewish cemeteries, in order to discourage religious practice and curb the activities of religious groups.",
"title": "Political iconoclasm"
},
{
"paragraph_id": 78,
"text": "During the Hungarian Revolution of 1956 and during the Revolutions of 1989, protesters often attacked and took down sculptures and images of Joseph Stalin, such as the Stalin Monument in Budapest.",
"title": "Political iconoclasm"
},
{
"paragraph_id": 79,
"text": "The fall of Communism in 1989–1991 was also followed by the destruction or removal of statues of Vladimir Lenin and other Communist leaders in the former Soviet Union and in other Eastern Bloc countries. Particularly well-known was the destruction of \"Iron Felix\", the statue of Felix Dzerzhinsky outside the KGB's headquarters. Another statue of Dzerzhinsky was destroyed in a Warsaw square that was named after him during communist rule, but which is now called Bank Square.",
"title": "Political iconoclasm"
},
{
"paragraph_id": 80,
"text": "During the American Revolution, the Sons of Liberty pulled down and destroyed the gilded lead statue of George III of the United Kingdom on Bowling Green (New York City), melting it down to be recast as ammunition. Similar acts have accompanied the independence of most ex-colonial territories. Sometimes relatively intact monuments are moved to a collected display in a less prominent place, as in India and also post-Communist countries.",
"title": "Political iconoclasm"
},
{
"paragraph_id": 81,
"text": "In August 2017, a statue of a Confederate soldier dedicated to \"the boys who wore the gray\" was pulled down from its pedestal in front of Durham County Courthouse in North Carolina by protesters. This followed the events at the 2017 Unite the Right rally in response to growing calls to remove Confederate monuments and memorials across the U.S.",
"title": "Political iconoclasm"
},
{
"paragraph_id": 82,
"text": "During the George Floyd protests of 2020, demonstrators pulled down dozens of statues which they considered symbols of the Confederacy, slavery, segregation, or racism, including the statue of Williams Carter Wickham in Richmond, Virginia.",
"title": "Political iconoclasm"
},
{
"paragraph_id": 83,
"text": "Further demonstrations in the wake of the George Floyd protests have resulted in the removal of:",
"title": "Political iconoclasm"
},
{
"paragraph_id": 84,
"text": "Multiple statues of early European explorers and founders were also vandalized, including those of Christopher Columbus, George Washington, and Thomas Jefferson.",
"title": "Political iconoclasm"
},
{
"paragraph_id": 85,
"text": "A statue of the African-American abolitionist statesman Frederick Douglass was vandalised in Rochester, New York, by being torn from its base and left close to a nearby river gorge. Donald Trump attributed the act to anarchists, but he did not substantiate his claim nor did he offer a theory on motive. Cornell William Brooks, former president of the NAACP, theorised that this was an act of revenge from white supremacists. Carvin Eison, who led the project that brought the Douglass statues to Rochester, thought it was unlikely that the Douglass statue was toppled by someone who was upset about monuments honoring Confederate figures, and added that \"it's only logical that it was some kind of retaliation event in someone's mind\". Police did not find evidence that supported or refuted either claim, and the vandalism case remains unsolved.",
"title": "Political iconoclasm"
}
]
| Iconoclasm is the social belief in the importance of the destruction of icons and other images or monuments, most frequently for religious or political reasons. People who engage in or support iconoclasm are called iconoclasts, a term that has come to be figuratively applied to any individual who challenges "cherished beliefs or venerated institutions on the grounds that they are erroneous or pernicious." Conversely, one who reveres or venerates religious images is called an iconolater; in a Byzantine context, such a person is called an iconodule or iconophile. Iconoclasm does not generally encompass the destruction of the images of a specific ruler after his or her death or overthrow, a practice better known as damnatio memoriae. While iconoclasm may be carried out by adherents of a different religion, it is more commonly the result of sectarian disputes between factions of the same religion. The term originates from the Byzantine Iconoclasm, the struggles between proponents and opponents of religious icons in the Byzantine Empire from 726 to 842 AD. Degrees of iconoclasm vary greatly among religions and their branches, but are strongest in religions which oppose idolatry, including the Abrahamic religions. Outside of the religious context, iconoclasm can refer to movements for widespread destruction in symbols of an ideology or cause, such as the destruction of monarchist symbols during the French Revolution. | 2001-09-28T18:10:44Z | 2023-12-30T14:04:34Z | [
"Template:Citation",
"Template:Cbignore",
"Template:Cite report",
"Template:Commons category",
"Template:Main",
"Template:Rp",
"Template:Blockquote",
"Template:Citation needed",
"Template:Cite news",
"Template:Short description",
"Template:Cite web",
"Template:Webarchive",
"Template:Cite journal",
"Template:Doi",
"Template:Wikiquote",
"Template:Redirect",
"Template:Lang-grc",
"Template:Further",
"Template:Lang",
"Template:Reflist",
"Template:Bibleverse",
"Template:Destroyed heritage",
"Template:Heresies condemned by the Catholic Church",
"Template:Cite magazine",
"Template:JSTOR",
"Template:ISBN",
"Template:OCLC",
"Template:For",
"Template:Anchor",
"Template:Circa",
"Template:Notelist",
"Template:Sfn",
"Template:Cite book",
"Template:Wiktionary",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Iconoclasm |
15,086 | IWW (disambiguation) | IWW, or Industrial Workers of the World (known as the Wobblies), are an international union founded in 1905.
IWW may also refer to: | [
{
"paragraph_id": 0,
"text": "IWW, or Industrial Workers of the World (known as the Wobblies), are an international union founded in 1905.",
"title": ""
},
{
"paragraph_id": 1,
"text": "IWW may also refer to:",
"title": ""
}
]
| IWW, or Industrial Workers of the World, are an international union founded in 1905. IWW may also refer to: Industrial WasteWater
Inland waterway, a navigable river, canal, or sound
Irish Whip Wrestling, an Irish-owned independent professional wrestling promotion established in 2002 | 2022-03-28T06:20:17Z | [
"Template:Wiktionary",
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/IWW_(disambiguation) |
|
15,087 | Imbolc | Imbolc or Imbolg (Irish pronunciation: [ɪˈmˠɔlˠɡ]), also called Saint Brigid's Day (Irish: Lá Fhéile Bríde; Scottish Gaelic: Là Fhèill Brìghde; Manx: Laa'l Breeshey), is a Gaelic traditional festival. It marks the beginning of spring, and for Christians, it is the feast day of Saint Brigid, Ireland's patroness saint. It is held on 1 February, which is about halfway between the winter solstice and the spring equinox. Historically, its traditions were widely observed throughout Ireland, Scotland and the Isle of Man. Imbolc is one of the four Gaelic seasonal festivals, along with Beltane, Lughnasadh and Samhain.
Imbolc is mentioned in early Irish literature, and some evidence suggests it was also an important date in ancient times. It is believed that Imbolc was originally a pagan festival associated with the lambing season and the goddess Brigid. Historians suggest that the saint and her feast day are Christianizations of these. The customs of St Brigid's Day did not begin to be recorded in detail until the early modern era. In recent centuries, its traditions have included weaving Brigid's crosses, hung over doors and windows to protect against fire, illness, and evil spirits. People also made a doll of Brigid (a Brídeóg), which was paraded around the community by girls, sometimes accompanied by 'strawboys'. Brigid was said to visit one's home on St Brigid's Eve. To receive her blessings, people would make a bed for Brigid, leave her food and drink, and set items of clothing outside for her to bless. Holy wells would be visited, a special meal would be had, and the day was traditionally linked with weather lore.
Although many of its traditions died out in the 20th century, it is still observed by some Christians as a religious holiday and by some non-Christians as a cultural one, and its customs have been revived in some places. Since the later 20th century, Celtic neopagans and Wiccans have observed Imbolc as a religious holiday. Since 2023, "Imbolc/St Brigid's Day" has been an annual public holiday in the Republic of Ireland.
Historians such as Ronald Hutton argue that the festival must have pre-Christian origins. Some scholars argue that the date of Imbolc was significant in Ireland since the Neolithic. A few passage tombs in Ireland are aligned with the sunrise around the times of Imbolc and Samhain. This includes the Mound of the Hostages on the Hill of Tara, and Cairn L at Slieve na Calliagh. Frank Prendergast argues that this alignment is so rare that it is a product of chance.
The etymology of Imbolc or Imbolg is unclear. A common explanation is that it comes from the Old Irish i mbolc (Modern Irish: i mbolg), meaning 'in the belly', and refers to the pregnancy of ewes at this time of year. Joseph Vendryes linked it to the Old Irish verb folcaim, 'to wash/cleanse oneself'. He suggested that it referred to a ritual cleansing, similar to the ancient Roman festival Februa or Lupercalia, which took place at the same time of year. Eric P. Hamp derives it from a Proto-Indo-European root meaning both 'milk' and 'cleansing'. Professor Alan Ward derives it from the Proto-Celtic *embibolgon, 'budding'. The early 10th century Cormac's Glossary has an entry for Oímelc, calling it the beginning of spring and deriving it from oí-melg ('ewe milk'), explaining it as "the time that sheep's milk comes". However, linguists believe this is the writer's respelling of the word to give it an understandable etymology.
The Táin Bó Cúailnge ('Cattle Raid of Cooley') indicates that Imbolc (spelt imolg) is three months after the 1 November festival of Samhain. Imbolc is mentioned in another Old Irish poem about the Táin in the Metrical Dindshenchas: "iar n-imbulc, ba garb a ngeilt", which Edward Gwynn translates "after Candlemas, rough was their herding". Candlemas is the Christian holy day which falls on 2 February and is known in Irish as Lá Fhéile Muire na gCoinneal, 'feast day of Mary of the Candles'.
Hutton writes that Imbolc must have been "important enough for its date to be dedicated subsequently to Brigid … the Mother Saint of Ireland". Cogitosus, writing in the late 7th century, first mentions a feast day of Saint Brigid being observed in Kildare on 1 February. Brigid is said to have lived in the 6th century and founded the important monastery of Kildare. She became the focus of a major cult. However, there are few historical facts about her, and her early hagiographies "are mainly anecdotes and miracle stories, some of which are deeply rooted in Irish pagan folklore". It is suggested that Saint Brigid is based on the goddess Brigid, or that she was a real person and the lore of the goddess was transferred to her. Like the saint, the goddess is associated with wisdom, poetry, healing, protection, blacksmithing, and domesticated animals, according to Cormac's Glossary and Lebor Gabála Érenn. It is suggested that the festival, which celebrates the start of lambing, is linked with Brigid in her role as a fertility goddess. Hutton says that the goddess might have already been linked to Imbolc and this was continued by making it the saint's feast day. Or it could be that Imbolc's association with milk drew the saint to it because of a legend that she had been the wet-nurse of Christ.
The festival of Imbolc is mentioned in several early Irish manuscripts, but they say very little about its original rites and customs. Imbolc was one of four main seasonal festivals in Gaelic Ireland, along with Beltane (1 May), Lughnasadh (1 August) and Samhain (1 November). The tale Tochmarc Emire, which survives in a 10th-century version, names Imbolc as one of four seasonal festivals, and says it is "when the ewes are milked at spring's beginning". This linking of Imbolc with the arrival of lambs and sheep's milk probably reflected farming customs that ensured lambs were born before calves. In late winter/early spring, sheep could survive better than cows on the sparse vegetation, and farmers sought to resume milking as soon as possible due to their dwindling stores. The Hibernica Minora includes an Old Irish poem about the four seasonal festivals. Translated by Kuno Meyer (1894), it says, "Tasting of each food according to order, this is what is proper at Imbolc: washing the hands, the feet, the head". This suggests ritual cleansing. It has been suggested that originally the timing of the festival was more fluid and associated with the onset of the lambing season, the beginning of preparations for the spring sowing, and the blooming of blackthorn.
Prominent folklorist Seán Ó Súilleabháin wrote: "The main significance of the Feast of St. Brigid would seem to be that it was a Christianisation of one of the focal points of the agricultural year in Ireland, the starting point of preparations for the spring sowing. Every manifestation of the cult of the saint (or of the deity she replaced) is bound up in some way with food production".
From the 18th century to the mid-20th century, many St Brigid's Day traditions were recorded by folklorists and other writers. They tell us how it was celebrated then and shed light on how it may have been celebrated in the past.
In Ireland, Brigid's crosses (pictured) are traditionally made on St Brigid's Day. A Brigid's cross usually consists of rushes woven into a four-armed equilateral cross, although there were also three-armed crosses. They are traditionally hung over doors, windows, and stables to welcome Brigid and for protection against fire, lightning, illness, and evil spirits. The crosses are generally left until the next St Brigid's Day. In western Connacht, people made a Crios Bríde (Bríd's girdle); a great ring of rushes with a cross woven in the middle. Young boys would carry it around the village, inviting people to step through it and be blessed.
On St Brigid's Eve, Brigid was said to visit virtuous households and bless the inhabitants. As Brigid represented the light half of the year and the power that will bring people from the dark season of winter into spring, her presence was vital at this time of year.
Before going to bed, people would leave items of clothing or strips of cloth outside for Brigid to bless. The next morning, they would be brought inside and believed to have powers of healing and protection.
Brigid would be symbolically invited into the house and a bed would often be made for her. In Ulster, a family member representing Brigid would circle the home three times carrying rushes. They would knock the door three times, asking to be let in. On the third attempt, they are welcomed in, a meal is had, and the rushes are then made into crosses or a bed for Brigid. In 18th-century Mann, the custom was to stand at the door with a bundle of rushes and say "Brede, Brede, come to my house tonight. Open the door for Brede and let Brede come in". Similarly, in County Donegal, the family member who was sent to fetch the rushes knelt on the front step and repeated three times, "Go on your knees, open your eyes, and let in St Brigid". Those inside the house answered three times, "She's welcome". The rushes were then strewn on the floor as a carpet or bed for Brigid. In the 19th century, some old Manx women would make a bed for Brigid in the barn with food, ale, and a candle on a table. The custom of making Brigid's bed was prevalent in the Hebrides of Scotland, where it was recorded as far back as the 17th century. A bed of hay or a basket-like cradle would be made for Brigid. Someone would then call out three times: "a Bhríd, a Bhríd, thig a stigh as gabh do leabaidh" ("Bríd Bríd, come in; thy bed is ready"). A corn dolly called the dealbh Bríde (icon of Brigid) would be laid in the bed and a white wand, usually made of birch, would be laid beside it. It represented the wand that Brigid was said to use to make the vegetation start growing again. Women in some parts of the Hebrides would also dance while holding a large cloth and calling out "Bridean, Bridean, thig an nall 's dean do leabaidh" ("Bríd, Bríd, come over and make your bed").
In the Outer Hebrides, ashes from the fire would be raked smooth, and, in the morning, they would look for some mark on the ashes as a sign that Brigid had visited. If there was no mark, they believed bad fortune would come unless they buried a cockerel at the meeting of three streams as an offering and burned incense on their fire that night.
In Ireland and Scotland, a representation of Brigid would be paraded around the community by girls and young women. Usually, it was a doll known as a Brídeóg ('little Brigid'), called a 'Breedhoge' or 'Biddy' in English. It would be made from rushes or reeds and clad in bits of cloth, flowers, or shells. In the Hebrides of Scotland, a bright shell or crystal called the reul-iuil Bríde (guiding star of Brigid) was set on its chest. The girls would carry it in procession while singing a hymn to Brigid. All wore white with their hair unbound as a symbol of purity and youth. They visited every house in the area, where they received either food or more decoration for the Brídeóg. Afterward, they feasted in a house with the Brídeóg set in a place of honour, and put it to bed with lullabies. When the meal was done, the local young men humbly asked for admission, made obeisance to the Brídeóg, and joined the girls in dancing and merrymaking. In many places, only unwed girls could carry the Brídeóg, but in some both boys and girls carried it.
In parts of Ireland, rather than carrying a Brídeóg, a girl took on the role of Brigid. Escorted by other girls, she went house-to-house wearing 'Brigid's crown' and carrying 'Brigid's shield' and 'Brigid's cross', all made from rushes. The procession in some places included 'strawboys', who wore conical straw hats, masks and played folk music; much like the wrenboys. Up until the mid-20th century, children in Ireland still went house-to-house asking for pennies for "poor Biddy", or money for the poor. In County Kerry, men in white robes sang from house to house.
The festival is traditionally associated with weather lore, and the old tradition of watching to see if serpents or badgers came from their winter dens may be a forerunner of the North American Groundhog Day. A Scottish Gaelic proverb about the day is:
Imbolc was believed to be when the Cailleach—the divine hag of Gaelic tradition—gathers her firewood for the rest of the winter. Legend has it that if she wishes to make the winter last a good while longer, she will make sure the weather on Imbolc is bright and sunny so that she can gather plenty of firewood. Therefore, people would be relieved if Imbolc is a day of foul weather, as it means the Cailleach is asleep and winter is almost over. At Imbolc on the Isle of Man, where she is known as Caillagh ny Groamagh, the Cailleach is said to take the form of a gigantic bird carrying sticks in her beak.
Families would have a special meal or supper on St Brigid's Eve to mark the last night of winter. This typically included food such as colcannon, sowans, dumplings, barmbrack or bannocks. Often, some of the food and drink would be set aside for Brigid.
In Ireland, a spring cleaning was customary around St Brigid's Day.
People traditionally visit holy wells and pray for health while walking 'sunwise' around the well. They might then leave offerings, typically coins or strips of cloth/ribbon (see clootie well). Historically, water from the well was used to bless the home, family members, livestock, and fields.
Scottish writer Donald Alexander Mackenzie also recorded in the 19th century that offerings were made "to earth and sea". The offering could be milk poured into the ground or porridge poured into the water as a libation.
In County Kilkenny, graves were decorated with box and laurel flowers (or any other flowers that could be found at that time). A Branch of Virginity was decorated with white ribbons and placed on the grave of a recently deceased maiden.
Today, St Brigid's Day and Imbolc are observed by Christians and non-Christians. Some people still make Brigid's crosses and Brídeogs or visit holy wells dedicated to St Brigid on 1 February. Brigid's Day parades have been revived in the town of Killorglin, County Kerry, which holds a yearly "Biddy's Day Festival". Men and women wearing elaborate straw hats and masks visit public houses carrying a Brídeóg to ward off evil spirits and bring good luck for the coming year. There are folk music sessions, historical talks, film screenings, drama productions, and cross-weaving workshops. The main event is a torchlight parade of 'Biddy groups' through the town. Since 2009 a yearly "Brigid of Faughart Festival" is held in County Louth. This celebrates Brigid as both saint and goddess and includes the long-standing pilgrimage to Faughart as well as music, poetry, and lectures.
The "Imbolc International Music Festival" of folk music is held in Derry at this time of year. In England, the village of Marsden, West Yorkshire holds a biennial "Imbolc Fire Festival" which includes a lantern procession, fire performers, music, fireworks, and a symbolic battle between giant characters representing the Green Man and Jack Frost.
More recently, Irish embassies have hosted yearly events on St Brigid's Day to celebrate famous women of the Irish diaspora and showcase the work of Irish female emigrants in the arts. In 2022, Dublin hosted its first "Brigit Festival", celebrating "the contributions of Irish women" past and present through exhibitions, tours, lectures, films, and a concert.
From 2023, "Imbolc/St Brigid's Day" will be a yearly public holiday in the Republic of Ireland to mark both the saint's feast day and the seasonal festival. A government statement noted that it would be the first Irish public holiday named after a woman, and "means that all four of the traditional Celtic seasonal festivals will now be public holidays".
Imbolc or Imbolc-based festivals are observed by some Neopagans, though practices vary widely. While some attempt to closely emulate the historic accounts of Imbolc, others rely on many sources to inspire their celebrations. Festivals typically fall near 1 February in the Northern Hemisphere and 1 August in the Southern Hemisphere.
Some Neopagans celebrate the festival at the astronomical midpoint between the winter solstice and spring equinox, while others rely on the full moon nearest this point. In the Northern Hemisphere, this is usually on 3 or 4 February. Some Neopagans designate Imbolc based on other natural phenomena, such as the emergence of primroses, dandelions, or similar local flora.
Celtic Reconstructionists strive to reconstruct ancient Celtic religion. Their religious practices are based on research and historical accounts, but may be modified slightly to suit modern life. They avoid syncretism (i.e., combining practises from different cultures). They usually celebrate the festival when the first stirrings of spring are felt or on the full moon nearest this. Many use traditional songs and rites from sources such as The Silver Bough and The Carmina Gadelica. It is a time of honouring the goddess Brigid, and many of her dedicants choose this time of year for rituals to her.
Wiccans and Neo-Druids celebrate Imbolc as one of the eight Sabbats in their Wheel of the Year, following Midwinter and preceding Ostara. In Wicca, Imbolc is commonly associated with the goddess Brigid; as such, it is sometimes seen as a "women's holiday" with specific rites only for female members of a coven. Among Dianic Wiccans, Imbolc is the traditional time for initiations. | [
{
"paragraph_id": 0,
"text": "Imbolc or Imbolg (Irish pronunciation: [ɪˈmˠɔlˠɡ]), also called Saint Brigid's Day (Irish: Lá Fhéile Bríde; Scottish Gaelic: Là Fhèill Brìghde; Manx: Laa'l Breeshey), is a Gaelic traditional festival. It marks the beginning of spring, and for Christians, it is the feast day of Saint Brigid, Ireland's patroness saint. It is held on 1 February, which is about halfway between the winter solstice and the spring equinox. Historically, its traditions were widely observed throughout Ireland, Scotland and the Isle of Man. Imbolc is one of the four Gaelic seasonal festivals, along with: Beltane, Lughnasadh and Samhain.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Imbolc is mentioned in early Irish literature, and some evidence suggests it was also an important date in ancient times. It is believed that Imbolc was originally a pagan festival associated with the lambing season and the goddess Brigid. Historians suggest that the saint and her feast day are Christianizations of these. The customs of St Brigid's Day did not begin to be recorded in detail until the early modern era. In recent centuries, its traditions have included weaving Brigid's crosses, hung over doors and windows to protect against fire, illness, and evil spirits. People also made a doll of Brigid (a Brídeóg), which was paraded around the community by girls, sometimes accompanied by 'strawboys'. Brigid was said to visit one's home on St Brigid's Eve. To receive her blessings, people would make a bed for Brigid, leave her food and drink, and set items of clothing outside for her to bless. Holy wells would be visited, a special meal would be had, and the day was traditionally linked with weather lore.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Although many of its traditions died out in the 20th century, it is still observed by some Christians as a religious holiday and by some non-Christians as a cultural one, and its customs have been revived in some places. Since the later 20th century, Celtic neopagans and Wiccans have observed Imbolc as a religious holiday. Since 2023, \"Imbolc/St Brigid's Day\" has been an annual public holiday in the Republic of Ireland.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Historians such as Ronald Hutton argue that the festival must have pre-Christian origins. Some scholars argue that the date of Imbolc was significant in Ireland since the Neolithic. A few passage tombs in Ireland are aligned with the sunrise around the times of Imbolc and Samhain. This includes the Mound of the Hostages on the Hill of Tara, and Cairn L at Slieve na Calliagh. Frank Prendergast argues that this alignment is so rare that it is a product of chance.",
"title": "Origins and etymology"
},
{
"paragraph_id": 4,
"text": "The etymology of Imbolc or Imbolg is unclear. A common explanation is that it comes from the Old Irish i mbolc (Modern Irish: i mbolg), meaning 'in the belly', and refers to the pregnancy of ewes at this time of year. Joseph Vendryes linked it to the Old Irish verb folcaim, 'to wash/cleanse oneself'. He suggested that it referred to a ritual cleansing, similar to the ancient Roman festival Februa or Lupercalia, which took place at the same time of year. Eric P. Hamp derives it from a Proto-Indo-European root meaning both 'milk' and 'cleansing'. Professor Alan Ward derives it from the Proto-Celtic *embibolgon, 'budding'. The early 10th century Cormac's Glossary has an entry for Oímelc, calling it the beginning of spring and deriving it from oí-melg ('ewe milk'), explaining it as \"the time that sheep's milk comes\". However, linguists believe this is the writer's respelling of the word to give it an understandable etymology.",
"title": "Origins and etymology"
},
{
"paragraph_id": 5,
"text": "The Táin Bó Cúailnge ('Cattle Raid of Cooley') indicates that Imbolc (spelt imolg) is three months after the 1 November festival of Samhain. Imbolc is mentioned in another Old Irish poem about the Táin in the Metrical Dindshenchas: \"iar n-imbulc, ba garb a ngeilt\", which Edward Gwynn translates \"after Candlemas, rough was their herding\". Candlemas is the Christian holy day which falls on 2 February and is known in Irish as Lá Fhéile Muire na gCoinneal, 'feast day of Mary of the Candles'.",
"title": "Origins and etymology"
},
{
"paragraph_id": 6,
"text": "Hutton writes that Imbolc must have been \"important enough for its date to be dedicated subsequently to Brigid … the Mother Saint of Ireland\". Cogitosus, writing in the late 7th century, first mentions a feast day of Saint Brigid being observed in Kildare on 1 February. Brigid is said to have lived in the 6th century and founded the important monastery of Kildare. She became the focus of a major cult. However, there are few historical facts about her, and her early hagiographies \"are mainly anecdotes and miracle stories, some of which are deeply rooted in Irish pagan folklore\". It is suggested that Saint Brigid is based on the goddess Brigid, or that she was a real person and the lore of the goddess was transferred to her. Like the saint, the goddess is associated with wisdom, poetry, healing, protection, blacksmithing, and domesticated animals, according to Cormac's Glossary and Lebor Gabála Érenn. It is suggested that the festival, which celebrates the start of lambing, is linked with Brigid in her role as a fertility goddess. Hutton says that the goddess might have already been linked to Imbolc and this was continued by making it the saint's feast day. Or it could be that Imbolc's association with milk drew the saint to it because of a legend that she had been the wet-nurse of Christ.",
"title": "Origins and etymology"
},
{
"paragraph_id": 7,
"text": "The festival of Imbolc is mentioned in several early Irish manuscripts, but they say very little about its original rites and customs. Imbolc was one of four main seasonal festivals in Gaelic Ireland, along with Beltane (1 May), Lughnasadh (1 August) and Samhain (1 November). The tale Tochmarc Emire, which survives in a 10th-century version, names Imbolc as one of four seasonal festivals, and says it is \"when the ewes are milked at spring's beginning\". This linking of Imbolc with the arrival of lambs and sheep's milk probably reflected farming customs that ensured lambs were born before calves. In late winter/early spring, sheep could survive better than cows on the sparse vegetation, and farmers sought to resume milking as soon as possible due to their dwindling stores. The Hibernica Minora includes an Old Irish poem about the four seasonal festivals. Translated by Kuno Meyer (1894), it says, \"Tasting of each food according to order, this is what is proper at Imbolc: washing the hands, the feet, the head\". This suggests ritual cleansing. It has been suggested that originally the timing of the festival was more fluid and associated with the onset of the lambing season, the beginning of preparations for the spring sowing, and the blooming of blackthorn.",
"title": "Historic customs"
},
{
"paragraph_id": 8,
"text": "Prominent folklorist Seán Ó Súilleabháin wrote: \"The main significance of the Feast of St. Brigid would seem to be that it was a Christianisation of one of the focal points of the agricultural year in Ireland, the starting point of preparations for the spring sowing. Every manifestation of the cult of the saint (or of the deity she replaced) is bound up in some way with food production\".",
"title": "Historic customs"
},
{
"paragraph_id": 9,
"text": "From the 18th century to the mid-20th century, many St Brigid's Day traditions were recorded by folklorists and other writers. They tell us how it was celebrated then and shed light on how it may have been celebrated in the past.",
"title": "Historic customs"
},
{
"paragraph_id": 10,
"text": "In Ireland, Brigid's crosses (pictured) are traditionally made on St Brigid's Day. A Brigid's cross usually consists of rushes woven into a four-armed equilateral cross, although there were also three-armed crosses. They are traditionally hung over doors, windows, and stables to welcome Brigid and for protection against fire, lightning, illness, and evil spirits. The crosses are generally left until the next St Brigid's Day. In western Connacht, people made a Crios Bríde (Bríd's girdle); a great ring of rushes with a cross woven in the middle. Young boys would carry it around the village, inviting people to step through it and be blessed.",
"title": "Historic customs"
},
{
"paragraph_id": 11,
"text": "On St Brigid's Eve, Brigid was said to visit virtuous households and bless the inhabitants. As Brigid represented the light half of the year and the power that will bring people from the dark season of winter into spring, her presence was vital at this time of year.",
"title": "Historic customs"
},
{
"paragraph_id": 12,
"text": "Before going to bed, people would leave items of clothing or strips of cloth outside for Brigid to bless. The next morning, they would be brought inside and believed to have powers of healing and protection.",
"title": "Historic customs"
},
{
"paragraph_id": 13,
"text": "Brigid would be symbolically invited into the house and a bed would often be made for her. In Ulster, a family member representing Brigid would circle the home three times carrying rushes. They would knock the door three times, asking to be let in. On the third attempt, they are welcomed in, a meal is had, and the rushes are then made into crosses or a bed for Brigid. In 18th-century Mann, the custom was to stand at the door with a bundle of rushes and say \"Brede, Brede, come to my house tonight. Open the door for Brede and let Brede come in\". Similarly, in County Donegal, the family member who was sent to fetch the rushes knelt on the front step and repeated three times, \"Go on your knees, open your eyes, and let in St Brigid\". Those inside the house answered three times, \"She's welcome\". The rushes were then strewn on the floor as a carpet or bed for Brigid. In the 19th century, some old Manx women would make a bed for Brigid in the barn with food, ale, and a candle on a table. The custom of making Brigid's bed was prevalent in the Hebrides of Scotland, where it was recorded as far back as the 17th century. A bed of hay or a basket-like cradle would be made for Brigid. Someone would then call out three times: \"a Bhríd, a Bhríd, thig a stigh as gabh do leabaidh\" (\"Bríd Bríd, come in; thy bed is ready\"). A corn dolly called the dealbh Bríde (icon of Brigid) would be laid in the bed and a white wand, usually made of birch, would be laid beside it. It represented the wand that Brigid was said to use to make the vegetation start growing again. Women in some parts of the Hebrides would also dance while holding a large cloth and calling out \"Bridean, Bridean, thig an nall 's dean do leabaidh\" (\"Bríd, Bríd, come over and make your bed\").",
"title": "Historic customs"
},
{
"paragraph_id": 14,
"text": "In the Outer Hebrides, ashes from the fire would be raked smooth, and, in the morning, they would look for some mark on the ashes as a sign that Brigid had visited. If there was no mark, they believed bad fortune would come unless they buried a cockerel at the meeting of three streams as an offering and burned incense on their fire that night.",
"title": "Historic customs"
},
{
"paragraph_id": 15,
"text": "In Ireland and Scotland, a representation of Brigid would be paraded around the community by girls and young women. Usually, it was a doll known as a Brídeóg ('little Brigid'), called a 'Breedhoge' or 'Biddy' in English. It would be made from rushes or reeds and clad in bits of cloth, flowers, or shells. In the Hebrides of Scotland, a bright shell or crystal called the reul-iuil Bríde (guiding star of Brigid) was set on its chest. The girls would carry it in procession while singing a hymn to Brigid. All wore white with their hair unbound as a symbol of purity and youth. They visited every house in the area, where they received either food or more decoration for the Brídeóg. Afterward, they feasted in a house with the Brídeóg set in a place of honour, and put it to bed with lullabies. When the meal was done, the local young men humbly asked for admission, made obeisance to the Brídeóg, and joined the girls in dancing and merrymaking. In many places, only unwed girls could carry the Brídeóg, but in some both boys and girls carried it.",
"title": "Historic customs"
},
{
"paragraph_id": 16,
"text": "In parts of Ireland, rather than carrying a Brídeóg, a girl took on the role of Brigid. Escorted by other girls, she went house-to-house wearing 'Brigid's crown' and carrying 'Brigid's shield' and 'Brigid's cross', all made from rushes. The procession in some places included 'strawboys', who wore conical straw hats, masks and played folk music; much like the wrenboys. Up until the mid-20th century, children in Ireland still went house-to-house asking for pennies for \"poor Biddy\", or money for the poor. In County Kerry, men in white robes sang from house to house.",
"title": "Historic customs"
},
{
"paragraph_id": 17,
"text": "The festival is traditionally associated with weather lore, and the old tradition of watching to see if serpents or badgers came from their winter dens may be a forerunner of the North American Groundhog Day. A Scottish Gaelic proverb about the day is:",
"title": "Historic customs"
},
{
"paragraph_id": 18,
"text": "Imbolc was believed to be when the Cailleach—the divine hag of Gaelic tradition—gathers her firewood for the rest of the winter. Legend has it that if she wishes to make the winter last a good while longer, she will make sure the weather on Imbolc is bright and sunny so that she can gather plenty of firewood. Therefore, people would be relieved if Imbolc is a day of foul weather, as it means the Cailleach is asleep and winter is almost over. At Imbolc on the Isle of Man, where she is known as Caillagh ny Groamagh, the Cailleach is said to take the form of a gigantic bird carrying sticks in her beak.",
"title": "Historic customs"
},
{
"paragraph_id": 19,
"text": "Families would have a special meal or supper on St Brigid's Eve to mark the last night of winter. This typically included food such as colcannon, sowans, dumplings, barmbrack or bannocks. Often, some of the food and drink would be set aside for Brigid.",
"title": "Historic customs"
},
{
"paragraph_id": 20,
"text": "In Ireland, a spring cleaning was customary around St Brigid's Day.",
"title": "Historic customs"
},
{
"paragraph_id": 21,
"text": "People traditionally visit holy wells and pray for health while walking 'sunwise' around the well. They might then leave offerings, typically coins or strips of cloth/ribbon (see clootie well). Historically, water from the well was used to bless the home, family members, livestock, and fields.",
"title": "Historic customs"
},
{
"paragraph_id": 22,
"text": "Scottish writer Donald Alexander Mackenzie also recorded in the 19th century that offerings were made \"to earth and sea\". The offering could be milk poured into the ground or porridge poured into the water as a libation.",
"title": "Historic customs"
},
{
"paragraph_id": 23,
"text": "In County Kilkenny, graves were decorated with box and laurel flowers (or any other flowers that could be found at that time). A Branch of Virginity was decorated with white ribbons and placed on the grave of a recently deceased maiden.",
"title": "Historic customs"
},
{
"paragraph_id": 24,
"text": "Today, St Brigid's Day and Imbolc are observed by Christians and non-Christians. Some people still make Brigid's crosses and Brídeogs or visit holy wells dedicated to St Brigid on 1 February. Brigid's Day parades have been revived in the town of Killorglin, County Kerry, which holds a yearly \"Biddy's Day Festival\". Men and women wearing elaborate straw hats and masks visit public houses carrying a Brídeóg to ward off evil spirits and bring good luck for the coming year. There are folk music sessions, historical talks, film screenings, drama productions, and cross-weaving workshops. The main event is a torchlight parade of 'Biddy groups' through the town. Since 2009 a yearly \"Brigid of Faughart Festival\" is held in County Louth. This celebrates Brigid as both saint and goddess and includes the long-standing pilgrimage to Faughart as well as music, poetry, and lectures.",
"title": "Today"
},
{
"paragraph_id": 25,
"text": "The \"Imbolc International Music Festival\" of folk music is held in Derry at this time of year. In England, the village of Marsden, West Yorkshire holds a biennial \"Imbolc Fire Festival\" which includes a lantern procession, fire performers, music, fireworks, and a symbolic battle between giant characters representing the Green Man and Jack Frost.",
"title": "Today"
},
{
"paragraph_id": 26,
"text": "More recently, Irish embassies have hosted yearly events on St Brigid's Day to celebrate famous women of the Irish diaspora and showcase the work of Irish female emigrants in the arts. In 2022, Dublin hosted its first \"Brigit Festival\", celebrating \"the contributions of Irish women\" past and present through exhibitions, tours, lectures, films, and a concert.",
"title": "Today"
},
{
"paragraph_id": 27,
"text": "From 2023, \"Imbolc/St Brigid's Day\" will be a yearly public holiday in the Republic of Ireland to mark both the saint's feast day and the seasonal festival. A government statement noted that it would be the first Irish public holiday named after a woman, and \"means that all four of the traditional Celtic seasonal festival will now be public holidays\".",
"title": "Today"
},
{
"paragraph_id": 28,
"text": "Imbolc or Imbolc-based festivals are observed by some Neopagans, though practices vary widely. While some attempt to closely emulate the historic accounts of Imbolc, others rely on many sources to inspire their celebrations. Festivals typically fall near 1 February in the Northern Hemisphere and 1 August in the Southern Hemisphere.",
"title": "Today"
},
{
"paragraph_id": 29,
"text": "Some Neopagans celebrate the festival at the astronomical midpoint between the winter solstice and spring equinox, while others rely on the full moon nearest this point. In the Northern Hemisphere, this is usually on 3 or 4 February. Some Neopagans designate Imbolc based on other natural phenomena, such as the emergence of primroses, dandelions, or similar local flora.",
"title": "Today"
},
{
"paragraph_id": 30,
"text": "Celtic Reconstructionists strive to reconstruct ancient Celtic religion. Their religious practices are based on research and historical accounts, but may be modified slightly to suit modern life. They avoid syncretism (i.e., combining practises from different cultures). They usually celebrate the festival when the first stirrings of spring are felt or on the full moon nearest this. Many use traditional songs and rites from sources such as The Silver Bough and The Carmina Gadelica. It is a time of honouring the goddess Brigid, and many of her dedicants choose this time of year for rituals to her.",
"title": "Today"
},
{
"paragraph_id": 31,
"text": "Wiccans and Neo-Druids celebrate Imbolc as one of the eight Sabbats in their Wheel of the Year, following Midwinter and preceding Ostara. In Wicca, Imbolc is commonly associated with the goddess Brigid; as such, it is sometimes seen as a \"women's holiday\" with specific rites only for female members of a coven. Among Dianic Wiccans, Imbolc is the traditional time for initiations.",
"title": "Today"
}
]
| Imbolc or Imbolg, also called Saint Brigid's Day, is a Gaelic traditional festival. It marks the beginning of spring, and for Christians, it is the feast day of Saint Brigid, Ireland's patroness saint. It is held on 1 February, which is about halfway between the winter solstice and the spring equinox. Historically, its traditions were widely observed throughout Ireland, Scotland and the Isle of Man. Imbolc is one of the four Gaelic seasonal festivals, along with: Beltane, Lughnasadh and Samhain. Imbolc is mentioned in early Irish literature, and some evidence suggests it was also an important date in ancient times. It is believed that Imbolc was originally a pagan festival associated with the lambing season and the goddess Brigid. Historians suggest that the saint and her feast day are Christianizations of these. The customs of St Brigid's Day did not begin to be recorded in detail until the early modern era. In recent centuries, its traditions have included weaving Brigid's crosses, hung over doors and windows to protect against fire, illness, and evil spirits. People also made a doll of Brigid, which was paraded around the community by girls, sometimes accompanied by 'strawboys'. Brigid was said to visit one's home on St Brigid's Eve. To receive her blessings, people would make a bed for Brigid, leave her food and drink, and set items of clothing outside for her to bless. Holy wells would be visited, a special meal would be had, and the day was traditionally linked with weather lore. Although many of its traditions died out in the 20th century, it is still observed by some Christians as a religious holiday and by some non-Christians as a cultural one, and its customs have been revived in some places. Since the later 20th century, Celtic neopagans and Wiccans have observed Imbolc as a religious holiday. Since 2023, "Imbolc/St Brigid's Day" has been an annual public holiday in the Republic of Ireland. | 2001-09-28T18:13:54Z | 2023-12-31T22:27:32Z | [
"Template:Harvnb",
"Template:Wiktionary",
"Template:ISBN",
"Template:Celtic mythology topics",
"Template:Lang-gv",
"Template:Reflist",
"Template:Cite news",
"Template:Public holidays in the Republic of Ireland",
"Template:Lang-gd",
"Template:Cite book",
"Template:Verse translation",
"Template:Portal",
"Template:Lang-ga",
"Template:Unreliable source?",
"Template:Infobox holiday",
"Template:Cite web",
"Template:Contemporary witchcraft",
"Template:Use Hiberno-English",
"Template:Lang",
"Template:Celts",
"Template:Wheel of the Year",
"Template:Authority control",
"Template:Use dmy dates",
"Template:TOC limit",
"Template:Cite journal",
"Template:Ireland topics",
"Template:Short description",
"Template:IPA-ga"
]
| https://en.wikipedia.org/wiki/Imbolc |
15,088 | Isaiah | Isaiah (UK: /aɪˈzaɪ.ə/ or US: /aɪˈzeɪ.ə/; Hebrew: יְשַׁעְיָהוּ, Yəšaʿyāhū, "Yahweh is Salvation"; also known as Isaias or Esaias from Greek: Ἠσαΐας) was the 8th-century BC Israelite prophet after whom the Book of Isaiah is named.
Within the text of the Book of Isaiah, Isaiah is referred to as "the prophet", but the exact relationship between the Book of Isaiah and the actual prophet Isaiah is complicated. The traditional view is that all 66 chapters of the book of Isaiah were written by one man, Isaiah, possibly in two periods between 740 BC and c. 686 BC, separated by approximately 15 years.
Another widely held view is that parts of the first half of the book (chapters 1–39) originated with the historical prophet, interspersed with prose commentaries written in the time of King Josiah 100 years later, and that the remainder of the book dates from immediately before and immediately after the end of the exile in Babylon, almost two centuries after the time of the historical prophet, and perhaps these later chapters represent the work of an ongoing school of prophets who prophesied in accordance with his prophecies.
The first verse of the Book of Isaiah states that Isaiah prophesied during the reigns of Uzziah (or Azariah), Jotham, Ahaz, and Hezekiah, the kings of Judah. Uzziah's reign was 52 years in the middle of the 8th century BC, and Isaiah must have begun his ministry a few years before Uzziah's death, probably in the 740s BC. He may have been contemporary for some years with Manasseh. Thus, Isaiah may have prophesied for as long as 64 years.
According to some modern interpretations, Isaiah's wife was called "the prophetess", either because she was endowed with the prophetic gift, like Deborah and Huldah, or simply because she was the "wife of the prophet". They had two sons, naming the elder Shear-Jashub, meaning "A remnant shall return", and the younger Maher-Shalal-Hash-Baz, meaning, "Quickly to spoils, plunder speedily."
Soon after this, Shalmaneser V determined to subdue the northern Kingdom of Israel, taking over and destroying Samaria and beginning the Assyrian captivity. So long as Ahaz reigned, the kingdom of Judah was untouched by the Assyrian power. But when Hezekiah gained the throne, he was encouraged to rebel "against the king of Assyria", and entered into an alliance with the king of Egypt. The king of Assyria threatened the king of Judah, and at length invaded the land. Sennacherib's campaign in the Levant brought his powerful army into Judah. Hezekiah was reduced to despair, and submitted to the Assyrians. But after a brief interval, war broke out again. Again Sennacherib led an army into Judah, one detachment of which threatened Jerusalem. Isaiah on that occasion encouraged Hezekiah to resist the Assyrians, whereupon Sennacherib sent a threatening letter to Hezekiah, which he "spread before the LORD".
Then Isaiah son of Amoz sent this message to Hezekiah: “Thus said GOD, the God of Israel, to whom you have prayed, concerning King Sennacherib of Assyria—
this is the word that GOD has spoken concerning him: Fair Maiden Zion despises you, She mocks at you; Fair Jerusalem shakes Her head at you. Whom have you blasphemed and reviled? Against whom made loud your voice And haughtily raised your eyes?
Against the Holy One of Israel!
According to the account in 2 Kings 19 (and its derivative account in 2 Chronicles 32) an angel of God fell on the Assyrian army and 185,000 of its men were killed in one night. "Like Xerxes in Greece, Sennacherib never recovered from the shock of the disaster in Judah. He made no more expeditions against either Southern Palestine or Egypt."
The remaining years of Hezekiah's reign were peaceful. Isaiah probably lived to its close, and possibly into the reign of Manasseh. The time and manner of his death are not specified in either the Bible or other primary sources. The Talmud says that he suffered martyrdom by being sawn in two under the orders of Manasseh.
The book of Isaiah, along with the book of Jeremiah, is distinctive in the Hebrew bible for its direct portrayal of the "wrath of the LORD" as presented, for example, in Isaiah 9:19 stating "Through the wrath of the LORD of hosts is the land darkened, and the people shall be as the fuel of the fire."
The Ascension of Isaiah, a pseudepigraphical Christian text dated to sometime between the end of the 1st century and the beginning of the 3rd, gives a detailed story of Isaiah confronting an evil false prophet and ending with Isaiah being martyred – none of which is attested in the original Biblical account.
Gregory of Nyssa (c. 335–395) believed that the Prophet Isaiah "knew more perfectly than all others the mystery of the religion of the Gospel". Jerome (c. 342–420) also lauds the Prophet Isaiah, saying "He was more of an Evangelist than a Prophet, because he described all of the Mysteries of the Church of Christ so vividly that you would assume he was not prophesying about the future, but rather was composing a history of past events." Of specific note are the songs of the Suffering Servant, which Christians say are a direct prophetic revelation of the nature, purpose, and detail of the death of Jesus Christ.
The Book of Isaiah is quoted many times by New Testament writers. The Gospel of John says that Isaiah "saw Jesus' glory and spoke about him."
The Eastern Orthodox Church celebrates Saint Isaiah the Prophet with Saint Christopher on May 9. Isaiah is also listed on the page of saints for May 9 in the Roman martyrology of the Roman Catholic Church.
The Book of Mormon quotes Jesus Christ as stating that "great are the words of Isaiah", and that all things prophesied by Isaiah have been and will be fulfilled. The Book of Mormon and Doctrine and Covenants also quote Isaiah more than any other prophet from the Old Testament. Additionally, members of the Church of Jesus Christ of Latter-day Saints consider the founding of the church by Joseph Smith in the 19th century to be a fulfillment of Isaiah 11, the translation of the Book of Mormon to be a fulfillment of Isaiah 29, and the building of Latter-day Saint temples as a fulfillment of Isaiah 2:2.
Isaiah (Arabic: إِشَعْيَاء, romanized: Ishaʿyāʾ) is not mentioned by name in the Quran or the Hadith, but appears frequently as a prophet in Muslim sources such as the qiṣaṣ al-anbiyāʾ and various tafsirs. Al-Tabari (310/923) provides the typical accounts for Islamic traditions regarding Isaiah. He is listed among the prophets in the book of salawat Dalail al-Khayrat. He is further mentioned and accepted as a prophet by other Islamic scholars such as ibn Kathir, Abu Ishaq al-Tha'labi and al-Kisa'i and also modern scholars such as Muhammad Asad and Abdullah Yusuf Ali.
According to Muslim scholars, Isaiah prophesied the coming of Jesus and Muhammad, although the reference to Muhammad is disputed by other religious scholars. Isaiah's narrative in Islamic literature can be divided into three sections. The first establishes Isaiah as a prophet of Judea during the reign of Hezekiah; the second relates Isaiah's actions during the siege of Jerusalem in 701 BC by Sennacherib; and the third warns the nation of coming doom. Paralleling the Hebrew Bible, Islamic tradition states that Hezekiah was king in Jerusalem during Isaiah's time. Hezekiah heard and obeyed Isaiah's advice, but could not quell the turbulence in Israel. This tradition maintains that Hezekiah was a righteous man and that the turbulence worsened after him. After the death of the king, Isaiah told the people not to forsake God, and warned Israel to cease from its persistent sin and disobedience. Muslim tradition maintains that the unrighteous of Judea in their anger sought to kill Isaiah.
In a death that resembles that attributed to Isaiah in Lives of the Prophets, Muslim exegesis recounts that Isaiah was martyred by Israelites by being sawn in two.
In the courts of al-Ma'mun, the seventh Abbasid caliph, Ali al-Ridha, the great-grandson of Muhammad and prominent scholar of his era, was questioned by the Exilarch to prove through the Torah that both Jesus and Muhammad were prophets. Among his several proofs, al-Ridha references the Book of Isaiah, stating "Sha‘ya (Isaiah), the Prophet, said in the Torah concerning what you and your companions say ‘I have seen two riders to whom (He) illuminated earth. One of them was on a donkey and the other was on a camel. Who is the rider of the donkey, and who is the rider of the camel?'" The Exilarch was unable to answer with certainty. Al-Ridha goes on to state that "As for the rider of the donkey, he is ‘Isa (Jesus); and as for the rider of the camel, he is Muhammad, may Allah bless him and his family. Do you deny that this (statement) is in the Torah?" The Rabbi responds "No, I do not deny it."
Allusions in Jewish rabbinic literature to Isaiah contain various expansions, elaborations and inferences that go beyond what is presented in the text of the Bible.
According to the ancient rabbis, Isaiah was a descendant of Judah and Tamar, and his father Amoz was the brother of King Amaziah.
While Isaiah, says the Midrash, was walking up and down in his study, he heard God saying "Whom shall I send?" Then Isaiah said "Here am I; send me!" Thereupon God said to him, "My children are troublesome and sensitive; if you are ready to be insulted and even beaten by them, you may accept My message; if not, you would better renounce it". Isaiah accepted the mission, and was the most forbearing, as well as the most patriotic, among the prophets, always defending Israel and imploring forgiveness for its sins. When Isaiah said "I dwell in the midst of a people of unclean lips", he was rebuked by God for speaking in such terms of His people.
It is related in the Talmud that Rabbi Simeon ben Azzai found in Jerusalem an account wherein it was written that King Manasseh killed Isaiah. King Manasseh said to Isaiah "Moses, your master, said 'No man may see God and live'; but you have said 'I saw the Lord seated upon his throne'"; and went on to point out other contradictions—as between Deuteronomy and Isaiah 40; between Exodus 33 and 2 Kings. Isaiah thought: "I know that he will not accept my explanations; why should I increase his guilt?" He then uttered the tetragrammaton, a cedar-tree opened, and Isaiah disappeared within it. King Manasseh ordered the cedar to be sawn asunder, and when the saw reached his mouth Isaiah died; thus was he punished for having said "I dwell in the midst of a people of unclean lips".
A somewhat different version of this legend is given in the Jerusalem Talmud. According to that version Isaiah, fearing King Manasseh, hid himself in a cedar-tree, but his presence was betrayed by the fringes of his garment, and King Manasseh caused the tree to be sawn in half. A passage of the Targum to Isaiah quoted by Jolowicz states that when Isaiah fled from his pursuers and took refuge in the tree, and the tree was sawn in half, the prophet's blood spurted forth. The legend of Isaiah's martyrdom spread to the Arabs and to the Christians as, for example, Athanasius the bishop of Alexandria (c. 318) wrote, "Isaiah was sawn asunder".
According to rabbinic literature, Isaiah was the maternal grandfather of Manasseh of Judah.
In February 2018, archaeologist Eilat Mazar announced that she and her team had discovered a small seal impression which reads "[belonging] to Isaiah nvy" (could be reconstructed and read as "[belonging] to Isaiah the prophet") during the Ophel excavations, just south of the Temple Mount in Jerusalem. The tiny bulla was found "only 10 feet away" from where an intact bulla bearing the inscription "[belonging] to King Hezekiah of Judah" was discovered in 2015 by the same team. Although the name "Isaiah" in the Paleo-Hebrew alphabet is unmistakable, the damage on the bottom left part of the seal causes difficulties in confirming the word "prophet" or a name "Navi", casting some doubts whether this seal really belongs to the prophet Isaiah. | [
{
"paragraph_id": 0,
"text": "Isaiah (UK: /aɪˈzaɪ.ə/ or US: /aɪˈzeɪ.ə/; Hebrew: יְשַׁעְיָהוּ, Yəšaʿyāhū, \"Yahweh is Salvation\"; also known as Isaias or Esaias from Greek: Ἠσαΐας) was the 8th-century BC Israelite prophet after whom the Book of Isaiah is named.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Within the text of the Book of Isaiah, Isaiah is referred to as \"the prophet\", but the exact relationship between the Book of Isaiah and the actual prophet Isaiah is complicated. The traditional view is that all 66 chapters of the book of Isaiah were written by one man, Isaiah, possibly in two periods between 740 BC and c. 686 BC, separated by approximately 15 years.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Another widely held view is that parts of the first half of the book (chapters 1–39) originated with the historical prophet, interspersed with prose commentaries written in the time of King Josiah 100 years later, and that the remainder of the book dates from immediately before and immediately after the end of the exile in Babylon, almost two centuries after the time of the historical prophet, and perhaps these later chapters represent the work of an ongoing school of prophets who prophesied in accordance with his prophecies.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The first verse of the Book of Isaiah states that Isaiah prophesied during the reigns of Uzziah (or Azariah), Jotham, Ahaz, and Hezekiah, the kings of Judah. Uzziah's reign was 52 years in the middle of the 8th century BC, and Isaiah must have begun his ministry a few years before Uzziah's death, probably in the 740s BC. He may have been contemporary for some years with Manasseh. Thus, Isaiah may have prophesied for as long as 64 years.",
"title": "Biography"
},
{
"paragraph_id": 4,
"text": "According to some modern interpretations, Isaiah's wife was called \"the prophetess\", either because she was endowed with the prophetic gift, like Deborah and Huldah, or simply because she was the \"wife of the prophet\". They had two sons, naming the elder Shear-Jashub, meaning \"A remnant shall return\", and the younger Maher-Shalal-Hash-Baz, meaning, \"Quickly to spoils, plunder speedily.\"",
"title": "Biography"
},
{
"paragraph_id": 5,
"text": "Soon after this, Shalmaneser V determined to subdue the northern Kingdom of Israel, taking over and destroying Samaria and beginning the Assyrian captivity. So long as Ahaz reigned, the kingdom of Judah was untouched by the Assyrian power. But when Hezekiah gained the throne, he was encouraged to rebel \"against the king of Assyria\", and entered into an alliance with the king of Egypt. The king of Assyria threatened the king of Judah, and at length invaded the land. Sennacherib's campaign in the Levant brought his powerful army into Judah. Hezekiah was reduced to despair, and submitted to the Assyrians. But after a brief interval, war broke out again. Again Sennacherib led an army into Judah, one detachment of which threatened Jerusalem. Isaiah on that occasion encouraged Hezekiah to resist the Assyrians, whereupon Sennacherib sent a threatening letter to Hezekiah, which he \"spread before the LORD\".",
"title": "Biography"
},
{
"paragraph_id": 6,
"text": "Then Isaiah son of Amoz sent this message to Hezekiah: “Thus said GOD, the God of Israel, to whom you have prayed, concerning King Sennacherib of Assyria—",
"title": "Biography"
},
{
"paragraph_id": 7,
"text": "this is the word that GOD has spoken concerning him: Fair Maiden Zion despises you, She mocks at you; Fair Jerusalem shakes Her head at you. Whom have you blasphemed and reviled? Against whom made loud your voice And haughtily raised your eyes?",
"title": "Biography"
},
{
"paragraph_id": 8,
"text": "Against the Holy One of Israel!",
"title": "Biography"
},
{
"paragraph_id": 9,
"text": "According to the account in 2 Kings 19 (and its derivative account in 2 Chronicles 32) an angel of God fell on the Assyrian army and 185,000 of its men were killed in one night. \"Like Xerxes in Greece, Sennacherib never recovered from the shock of the disaster in Judah. He made no more expeditions against either Southern Palestine or Egypt.\"",
"title": "Biography"
},
{
"paragraph_id": 10,
"text": "The remaining years of Hezekiah's reign were peaceful. Isaiah probably lived to its close, and possibly into the reign of Manasseh. The time and manner of his death are not specified in either the Bible or other primary sources. The Talmud says that he suffered martyrdom by being sawn in two under the orders of Manasseh.",
"title": "Biography"
},
{
"paragraph_id": 11,
"text": "The book of Isaiah, along with the book of Jeremiah, is distinctive in the Hebrew bible for its direct portrayal of the \"wrath of the LORD\" as presented, for example, in Isaiah 9:19 stating \"Through the wrath of the LORD of hosts is the land darkened, and the people shall be as the fuel of the fire.\"",
"title": "Biography"
},
{
"paragraph_id": 12,
"text": "The Ascension of Isaiah, a pseudepigraphical Christian text dated to sometime between the end of the 1st century and the beginning of the 3rd, gives a detailed story of Isaiah confronting an evil false prophet and ending with Isaiah being martyred – none of which is attested in the original Biblical account.",
"title": "In Christianity"
},
{
"paragraph_id": 13,
"text": "Gregory of Nyssa (c. 335–395) believed that the Prophet Isaiah \"knew more perfectly than all others the mystery of the religion of the Gospel\". Jerome (c. 342–420) also lauds the Prophet Isaiah, saying \"He was more of an Evangelist than a Prophet, because he described all of the Mysteries of the Church of Christ so vividly that you would assume he was not prophesying about the future, but rather was composing a history of past events.\" Of specific note are the songs of the Suffering Servant, which Christians say are a direct prophetic revelation of the nature, purpose, and detail of the death of Jesus Christ.",
"title": "In Christianity"
},
{
"paragraph_id": 14,
"text": "The Book of Isaiah is quoted many times by New Testament writers. The Gospel of John says that Isaiah \"saw Jesus' glory and spoke about him.\"",
"title": "In Christianity"
},
{
"paragraph_id": 15,
"text": "The Eastern Orthodox Church celebrates Saint Isaiah the Prophet with Saint Christopher on May 9. Isaiah is also listed on the page of saints for May 9 in the Roman martyrology of the Roman Catholic Church.",
"title": "In Christianity"
},
{
"paragraph_id": 16,
"text": "The Book of Mormon quotes Jesus Christ as stating that \"great are the words of Isaiah\", and that all things prophesied by Isaiah have been and will be fulfilled. The Book of Mormon and Doctrine and Covenants also quote Isaiah more than any other prophet from the Old Testament. Additionally, members of the Church of Jesus Christ of Latter-day Saints consider the founding of the church by Joseph Smith in the 19th century to be a fulfillment of Isaiah 11, the translation of the Book of Mormon to be a fulfillment of Isaiah 29, and the building of Latter-day Saint temples as a fulfillment of Isaiah 2:2.",
"title": "In Christianity"
},
{
"paragraph_id": 17,
"text": "Isaiah (Arabic: إِشَعْيَاء, romanized: Ishaʿyāʾ) is not mentioned by name in the Quran or the Hadith, but appears frequently as a prophet in Muslim sources such as the qiṣaṣ al-anbiyāʾ and various tafsirs. Al-Tabari (310/923) provides the typical accounts for Islamic traditions regarding Isaiah. He is listed among the prophets in the book of salawat Dalail al-Khayrat. He is further mentioned and accepted as a prophet by other Islamic scholars such as ibn Kathir, Abu Ishaq al-Tha'labi and al-Kisa'i and also modern scholars such as Muhammad Asad and Abdullah Yusuf Ali.",
"title": "In Islam"
},
{
"paragraph_id": 18,
"text": "According to Muslim scholars, Isaiah prophesied the coming of Jesus and Muhammad, although the reference to Muhammad is disputed by other religious scholars. Isaiah's narrative in Islamic literature can be divided into three sections. The first establishes Isaiah as a prophet of Judea during the reign of Hezekiah; the second relates Isaiah's actions during the siege of Jerusalem in 597 BC by Sennacherib; and the third warns the nation of coming doom. Paralleling the Hebrew Bible, Islamic tradition states that Hezekiah was king in Jerusalem during Isaiah's time. Hezekiah heard and obeyed Isaiah's advice, but could not quell the turbulence in Israel. This tradition maintains that Hezekiah was a righteous man and that the turbulence worsened after him. After the death of the king, Isaiah told the people not to forsake God, and warned Israel to cease from its persistent sin and disobedience. Muslim tradition maintains that the unrighteous of Judea in their anger sought to kill Isaiah.",
"title": "In Islam"
},
{
"paragraph_id": 19,
"text": "In a death that resembles that attributed to Isaiah in Lives of the Prophets, Muslim exegesis recounts that Isaiah was martyred by Israelites by being sawn in two.",
"title": "In Islam"
},
{
"paragraph_id": 20,
"text": "In the courts of al-Ma'mun, the seventh Abbasid caliph, Ali al-Ridha, the great-grandson of Muhammad and prominent scholar of his era, was questioned by the Exilarch to prove through the Torah that both Jesus and Muhammad were prophets. Among his several proofs, al-Ridha references the Book of Isaiah, stating \"Sha‘ya (Isaiah), the Prophet, said in the Torah concerning what you and your companions say ‘I have seen two riders to whom (He) illuminated earth. One of them was on a donkey and the other was on a camel. Who is the rider of the donkey, and who is the rider of the camel?'\" The Exilarch was unable to answer with certainty. Al-Ridha goes on to state that \"As for the rider of the donkey, he is ‘Isa (Jesus); and as for the rider of the camel, he is Muhammad, may Allah bless him and his family. Do you deny that this (statement) is in the Torah?\" The Rabbi responds \"No, I do not deny it.\"",
"title": "In Islam"
},
{
"paragraph_id": 21,
"text": "Allusions in Jewish rabbinic literature to Isaiah contain various expansions, elaborations and inferences that go beyond what is presented in the text of the Bible.",
"title": "In rabbinic literature"
},
{
"paragraph_id": 22,
"text": "According to the ancient rabbis, Isaiah was a descendant of Judah and Tamar, and his father Amoz was the brother of King Amaziah.",
"title": "In rabbinic literature"
},
{
"paragraph_id": 23,
"text": "While Isaiah, says the Midrash, was walking up and down in his study he heard God saying \"Whom shall I send?\" Then Isaiah said \"Here am I; send me!\" Thereupon God said to him,\" My children are troublesome and sensitive; if you are ready to be insulted and even beaten by them, you may accept My message; if not, you would better renounce it\". Isaiah accepted the mission, and was the most forbearing, as well as the most patriotic, among the prophets, always defending Israel and imploring forgiveness for its sins. When Isaiah said \"I dwell in the midst of a people of unclean lips\", he was rebuked by God for speaking in such terms of His people.",
"title": "In rabbinic literature"
},
{
"paragraph_id": 24,
"text": "It is related in the Talmud that Rabbi Simeon ben Azzai found in Jerusalem an account wherein it was written that King Manasseh killed Isaiah. King Manasseh said to Isaiah \"Moses, your master, said 'No man may see God and live'; but you have said 'I saw the Lord seated upon his throne'\"; and went on to point out other contradictions—as between Deuteronomy and Isaiah 40; between Exodus 33 and 2 Kings Isaiah thought: \"I know that he will not accept my explanations; why should I increase his guilt?\" He then uttered the tetragrammaton, a cedar-tree opened, and Isaiah disappeared within it. King Manasseh ordered the cedar to be sawn asunder, and when the saw reached his mouth Isaiah died; thus was he punished for having said \"I dwell in the midst of a people of unclean lips\".",
"title": "In rabbinic literature"
},
{
"paragraph_id": 25,
"text": "A somewhat different version of this legend is given in the Jerusalem Talmud. According to that version Isaiah, fearing King Manasseh, hid himself in a cedar-tree, but his presence was betrayed by the fringes of his garment, and King Manasseh caused the tree to be sawn in half. A passage of the Targum to Isaiah quoted by Jolowicz states that when Isaiah fled from his pursuers and took refuge in the tree, and the tree was sawn in half, the prophet's blood spurted forth. The legend of Isaiah's martyrdom spread to the Arabs and to the Christians as, for example, Athanasius the bishop of Alexandria (c. 318) wrote, \"Isaiah was sawn asunder\".",
"title": "In rabbinic literature"
},
{
"paragraph_id": 26,
"text": "According to rabbinic literature, Isaiah was the maternal grandfather of Manasseh of Judah.",
"title": "In rabbinic literature"
},
{
"paragraph_id": 27,
"text": "In February 2018, archaeologist Eilat Mazar announced that she and her team had discovered a small seal impression which reads \"[belonging] to Isaiah nvy\" (could be reconstructed and read as \"[belonging] to Isaiah the prophet\") during the Ophel excavations, just south of the Temple Mount in Jerusalem. The tiny bulla was found \"only 10 feet away\" from where an intact bulla bearing the inscription \"[belonging] to King Hezekiah of Judah\" was discovered in 2015 by the same team. Although the name \"Isaiah\" in the Paleo-Hebrew alphabet is unmistakable, the damage on the bottom left part of the seal causes difficulties in confirming the word \"prophet\" or a name \"Navi\", casting some doubts whether this seal really belongs to the prophet Isaiah.",
"title": "Archaeology"
}
]
| Isaiah was the 8th-century BC Israelite prophet after whom the Book of Isaiah is named. Within the text of the Book of Isaiah, Isaiah is referred to as "the prophet", but the exact relationship between the Book of Isaiah and the actual prophet Isaiah is complicated. The traditional view is that all 66 chapters of the book of Isaiah were written by one man, Isaiah, possibly in two periods between 740 BC and c. 686 BC, separated by approximately 15 years. Another widely held view is that parts of the first half of the book originated with the historical prophet, interspersed with prose commentaries written in the time of King Josiah 100 years later, and that the remainder of the book dates from immediately before and immediately after the end of the exile in Babylon, almost two centuries after the time of the historical prophet, and perhaps these later chapters represent the work of an ongoing school of prophets who prophesied in accordance with his prophecies. | 2001-09-29T06:50:48Z | 2023-12-30T20:59:34Z | [
"Template:Lang-ar",
"Template:Circa",
"Template:Notelist",
"Template:Bibleref",
"Template:Wikisource-inline",
"Template:Commons category-inline",
"Template:Short description",
"Template:Eastons",
"Template:Authority control",
"Template:Efn",
"Template:Cite web",
"Template:Bibleverse",
"Template:Lang-he",
"Template:LORD",
"Template:Quote",
"Template:Wikiquote-inline",
"Template:Prophets of the Tanakh",
"Template:Muslim saints",
"Template:Catholic saints",
"Template:About",
"Template:Infobox saint",
"Template:Redirect-multi",
"Template:Cite book",
"Template:ISBN",
"Template:Book of Isaiah",
"Template:Reflist",
"Template:Lang-el",
"Template:Bibleverse-nb",
"Template:JewishEncyclopedia",
"Template:IPAc-en"
]
| https://en.wikipedia.org/wiki/Isaiah |
15,095 | Intifada | An intifada (Arabic: انتفاضة intifāḍah) is a rebellion or uprising, or a resistance movement. It is a key concept in contemporary Arabic usage referring to a uprising against oppression.
Intifada is an Arabic word literally meaning, as a noun, "tremor", "shivering", "shuddering". It is derived from an Arabic term nafada meaning "to shake", "shake off", "get rid of", as a dog might shrug off water, or as one might shake off sleep, or dirt from one's sandals.
The concept of intifada was first used in modern times in 1952 within the Kingdom of Iraq, when socialist and communist parties took to the streets to protest the Hashemite monarchy, with inspiration of the 1952 Egyptian Revolution.
The concept was adopted in Western Sahara, with the gradual withdrawal of Spanish forces in the 1970s as the Zemla Intifada, but was essentially rooted into the Western Sahara conflict with the First Sahrawi Intifada – protests by Sahrawi activists in the Western Saharan Southern Provinces (1999–2004), Second Sahrawi Intifada or Independence Intifada and finally the Gdeim Izik protest camp in 2011.
In the Palestinian context, the word refers to attempts to "shake off" the Israeli occupation of the West Bank and Gaza Strip in the First and Second Intifadas, where it was originally chosen to connote "aggressive nonviolent resistance", a meaning it bore among Palestinian students in struggles in the 1980s and which they adopted as less confrontational than terms in earlier militant rhetoric since it bore no nuance of violence. The First Intifada was characterized by protests and violent riots, especially stone-throwing, while the Second Intifada was characterized by a period of heightened violence. The suicide bombings carried out by Palestinian assailants became one of the more prominent features of the Second Intifada and mainly targeted Israeli civilians, contrasting with the relatively less violent nature of the First Intifada.
The phrase "Globalize the Intifada" is a slogan that promotes worldwide activism in solidarity with the Palestinian resistance. This slogan is composed of "Intifada" which denotes the Palestinian uprisings against Israeli control. "Globalize" calls for an expansion of these uprisings from a regional scope to a global movement.
The slogan and its associated chants have caused controversy, particularly concerning their impact and connotations. Critics, especially from Jewish groups, have condemned the slogan for encouraging widespread violence or terrorism. Some interpretations view it as a rallying call to harm Jews.
Intifada may refer to these events: | [
{
"paragraph_id": 0,
"text": "An intifada (Arabic: انتفاضة intifāḍah) is a rebellion or uprising, or a resistance movement. It is a key concept in contemporary Arabic usage referring to a uprising against oppression.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Intifada is an Arabic word literally meaning, as a noun, \"tremor\", \"shivering\", \"shuddering\". It is derived from an Arabic term nafada meaning \"to shake\", \"shake off\", \"get rid of\", as a dog might shrug off water, or as one might shake off sleep, or dirt from one's sandals.",
"title": "Etymology"
},
{
"paragraph_id": 2,
"text": "The concept of intifada was first used in modern times in 1952 within the Kingdom of Iraq, when socialist and communist parties took to the streets to protest the Hashemite monarchy, with inspiration of the 1952 Egyptian Revolution.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "The concept was adopted in Western Sahara, with the gradual withdrawal of Spanish forces in the 1970s as the Zemla Intifada, but was essentially rooted into the Western Sahara conflict with the First Sahrawi Intifada – protests by Sahrawi activists in the Western Saharan Southern Provinces (1999–2004), Second Sahrawi Intifada or Independence Intifada and finally the Gdeim Izik protest camp in 2011.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "In the Palestinian context, the word refers to attempts to \"shake off\" the Israeli occupation of the West Bank and Gaza Strip in the First and Second Intifadas, where it was originally chosen to connote \"aggressive nonviolent resistance\", a meaning it bore among Palestinian students in struggles in the 1980s and which they adopted as less confrontational than terms in earlier militant rhetoric since it bore no nuance of violence. The First Intifada was characterized by protests and violent riots, especially stone-throwing, while the Second Intifada was characterized by a period of heightened violence. The suicide bombings carried out by Palestinian assailants became one of the more prominent features of the Second Intifada and mainly targeted Israeli civilians, contrasting with the relatively less violent nature of the First Intifada.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The phrase \"Globalize the Intifada\" is a slogan that promotes worldwide activism in solidarity with the Palestinian resistance. This slogan is composed of \"Intifada\" which denotes the Palestinian uprisings against Israeli control. \"Globalize\" calls for an expansion of these uprisings from a regional scope to a global movement.",
"title": "Globalize the Intifada"
},
{
"paragraph_id": 6,
"text": "The chant and its associated chants have caused controversy, particularly concerning their impact and connotations. Critics, particularly from Jewish groups, have condemned the slogan for encouraging widespread violence or terrorism. Some interpretations view it as a rallying call to harm Jews.",
"title": "Globalize the Intifada"
},
{
"paragraph_id": 7,
"text": "Intifada may refer to these events:",
"title": "List of events named Intifada"
}
]
| An intifada is a rebellion or uprising, or a resistance movement. It is a key concept in contemporary Arabic usage referring to an uprising against oppression. | 2001-10-01T04:54:59Z | 2023-12-28T16:39:55Z | [
"Template:Transl",
"Template:Reflist",
"Template:Cite web",
"Template:Webarchive",
"Template:Wiktionary",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Pp-30-500",
"Template:Lang-ar",
"Template:Further",
"Template:Main",
"Template:Cite news",
"Template:Short description"
]
| https://en.wikipedia.org/wiki/Intifada |
15,097 | Ionosphere | The ionosphere (/aɪˈɒnəˌsfɪər/) is the ionized part of the upper atmosphere of Earth, from about 48 km (30 mi) to 965 km (600 mi) above sea level, a region that includes the thermosphere and parts of the mesosphere and exosphere. The ionosphere is ionized by solar radiation. It plays an important role in atmospheric electricity and forms the inner edge of the magnetosphere. It has practical importance because, among other functions, it influences radio propagation to distant places on Earth. It also affects GPS signals that travel through this layer.
As early as 1839, the German mathematician and physicist Carl Friedrich Gauss postulated that an electrically conducting region of the atmosphere could account for observed variations of Earth's magnetic field. Sixty years later, Guglielmo Marconi received the first trans-Atlantic radio signal on December 12, 1901, in St. John's, Newfoundland (now in Canada) using a 152.4 m (500 ft) kite-supported antenna for reception. The transmitting station in Poldhu, Cornwall, used a spark-gap transmitter to produce a signal with a frequency of approximately 500 kHz and a power roughly 100 times greater than that of any radio signal previously produced. The message received was three dits, the Morse code for the letter S. To reach Newfoundland the signal would have had to bounce off the ionosphere twice. Dr. Jack Belrose has contested this, however, based on theoretical and experimental work. Marconi did, in any case, achieve transatlantic wireless communications in Glace Bay, Nova Scotia, one year later.
In 1902, Oliver Heaviside proposed the existence of the Kennelly–Heaviside layer of the ionosphere which bears his name. Heaviside's proposal included means by which radio signals are transmitted around the Earth's curvature. Also in 1902, Arthur Edwin Kennelly discovered some of the ionosphere's radio-electrical properties.
In 1912, the U.S. Congress imposed the Radio Act of 1912 on amateur radio operators, limiting their operations to frequencies above 1.5 MHz (wavelength 200 meters or smaller). The government thought those frequencies were useless. This led to the discovery of HF radio propagation via the ionosphere in 1923.
In 1926, Scottish physicist Robert Watson-Watt introduced the term ionosphere in a letter published only in 1969 in Nature:
We have in quite recent years seen the universal adoption of the term 'stratosphere' ... and ... the companion term 'troposphere' ... The term 'ionosphere', for the region in which the main characteristic is large scale ionisation with considerable mean free paths, appears appropriate as an addition to this series.
In the early 1930s, test transmissions of Radio Luxembourg inadvertently provided evidence of the first radio modification of the ionosphere; HAARP ran a series of experiments in 2017 using the eponymous Luxembourg Effect.
Edward V. Appleton was awarded a Nobel Prize in 1947 for his confirmation in 1927 of the existence of the ionosphere. Lloyd Berkner first measured the height and density of the ionosphere. This permitted the first complete theory of short-wave radio propagation. Maurice V. Wilkes and J. A. Ratcliffe researched the topic of radio propagation of very long radio waves in the ionosphere. Vitaly Ginzburg has developed a theory of electromagnetic wave propagation in plasmas such as the ionosphere.
In 1962, the Canadian satellite Alouette 1 was launched to study the ionosphere. Following its success were Alouette 2 in 1965 and the two ISIS satellites in 1969 and 1971, further AEROS-A and -B in 1972 and 1975, all for measuring the ionosphere.
On July 26, 1963, the first operational geosynchronous satellite, Syncom 2, was launched. On-board radio beacons on this satellite (and its successors) enabled – for the first time – the measurement of total electron content (TEC) variation along a radio beam from geostationary orbit to an earth receiver. (The rotation of the plane of polarization directly measures TEC along the path.) From 1969 onwards, the Australian geophysicist Elizabeth Essex-Cohen used this technique to monitor the atmosphere above Australia and Antarctica.
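The rotation measured in such beacon experiments follows the standard Faraday-rotation relation, Ω ≈ 2.36 × 10⁴ · B∥ · TEC / f² radians in SI units, where B∥ is an effective geomagnetic field component along the ray. The sketch below uses assumed round numbers for B∥, TEC and a VHF beacon frequency purely to show the scale of the effect, not values from any particular experiment.

```python
# Minimal sketch of ionospheric Faraday rotation of a linearly polarised beacon signal:
# omega_rad ~= 2.36e4 * B_parallel * TEC / f**2 (SI units: tesla, electrons/m^2, Hz).
# B_parallel is treated as an effective constant along the path; all numbers below
# are assumed round figures for illustration only.

def faraday_rotation_rad(b_parallel_t: float, tec_el_per_m2: float, freq_hz: float) -> float:
    """Polarisation rotation, in radians, accumulated along a trans-ionospheric path."""
    return 2.36e4 * b_parallel_t * tec_el_per_m2 / freq_hz ** 2

if __name__ == "__main__":
    b_par = 5.0e-5    # assumed effective field component along the ray, ~50 microtesla
    tec = 1.0e17      # assumed slant TEC of 10 TECU
    f_vhf = 137.0e6   # assumed VHF beacon frequency, Hz
    print(f"Rotation: {faraday_rotation_rad(b_par, tec, f_vhf):.1f} rad")  # several radians
```

Because the rotation at VHF amounts to several radians for typical daytime electron contents, changes in TEC show up as easily measurable changes in the received polarization angle.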
The ionosphere is a shell of electrons and electrically charged atoms and molecules that surrounds the Earth, stretching from a height of about 50 km (30 mi) to more than 1,000 km (600 mi). It exists primarily due to ultraviolet radiation from the Sun.
The lowest part of the Earth's atmosphere, the troposphere, extends from the surface to about 10 km (6 mi). Above that is the stratosphere, followed by the mesosphere. In the stratosphere, incoming solar radiation creates the ozone layer. At heights above 80 km (50 mi), in the thermosphere, the atmosphere is so thin that free electrons can exist for short periods of time before they are captured by a nearby positive ion. The number of these free electrons is sufficient to affect radio propagation. This portion of the atmosphere is partially ionized and contains a plasma, which is referred to as the ionosphere.
Ultraviolet (UV), X-ray and shorter wavelengths of solar radiation are ionizing, since photons at these frequencies contain sufficient energy to dislodge an electron from a neutral gas atom or molecule upon absorption. In this process the light electron acquires a high velocity, so the temperature of the resulting electron gas is much higher (of the order of a thousand kelvin) than that of the ions and neutrals. The reverse process to ionization is recombination, in which a free electron is "captured" by a positive ion. Recombination occurs spontaneously, and causes the emission of a photon that carries away the energy released upon recombination. As gas density increases at lower altitudes, the recombination process prevails, since the gas molecules and ions are closer together. The balance between these two processes determines the quantity of ionization present.
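Where production and loss are locally in balance (a reasonable approximation in the lower ionospheric layers by day, and an assumption added here rather than stated above), this balance can be written compactly. Assuming loss is dominated by two-body electron–ion recombination with effective coefficient $\alpha$ and ion-pair production rate $q$:

$q = \alpha N_e^{2} \quad\Longrightarrow\quad N_e = \sqrt{q/\alpha}$

so the equilibrium electron density $N_e$ grows only as the square root of the production rate.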
Ionization depends primarily on the Sun and its Extreme Ultraviolet (EUV) and X-ray irradiance, which varies strongly with solar activity. The more magnetically active the Sun is, the more sunspot active regions there are on the Sun at any one time. Sunspot active regions are the source of increased coronal heating and accompanying increases in EUV and X-ray irradiance, particularly during episodic magnetic eruptions that include solar flares, which increase ionization on the sunlit side of the Earth, and solar energetic particle events, which can increase ionization in the polar regions. Thus the degree of ionization in the ionosphere follows both a diurnal (time of day) cycle and the 11-year solar cycle. There is also a seasonal dependence in ionization degree, since the local winter hemisphere is tipped away from the Sun and thus receives less solar radiation. Radiation received also varies with geographical location (polar, auroral zones, mid-latitudes, and equatorial regions). There are also mechanisms that disturb the ionosphere and decrease the ionization.
Sydney Chapman proposed that the region below the ionosphere be called neutrosphere (the neutral atmosphere).
At night the F layer is the only layer of significant ionization present, while the ionization in the E and D layers is extremely low. During the day, the D and E layers become much more heavily ionized, as does the F layer, which develops an additional, weaker region of ionization known as the F1 layer. The F2 layer persists by day and night and is the main region responsible for the refraction and reflection of radio waves.
The D layer is the innermost layer, 48 km (30 mi) to 90 km (56 mi) above the surface of the Earth. Ionization here is due to Lyman series-alpha hydrogen radiation at a wavelength of 121.6 nanometre (nm) ionizing nitric oxide (NO). In addition, solar flares can generate hard X-rays (wavelength < 1 nm) that ionize N2 and O2. Recombination rates are high in the D layer, so there are many more neutral air molecules than ions.
Medium frequency (MF) and lower high frequency (HF) radio waves are significantly attenuated within the D layer, as the passing radio waves cause electrons to move, which then collide with the neutral molecules, giving up their energy. Lower frequencies experience greater absorption because they move the electrons farther, leading to greater chance of collisions. This is the main reason for absorption of HF radio waves, particularly at 10 MHz and below, with progressively less absorption at higher frequencies. This effect peaks around noon and is reduced at night due to a decrease in the D layer's thickness; only a small part remains due to cosmic rays. A common example of the D layer in action is the disappearance of distant AM broadcast band stations in the daytime.
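The frequency dependence described above can be made quantitative with the standard expression for non-deviative absorption; the approximation below is textbook background (valid when the wave frequency $f$ is well above the electron–neutral collision frequency $\nu$), not a result stated in this article:

$\kappa \;\propto\; \frac{N\,\nu}{f^{2}}$

where $\kappa$ is the absorption coefficient and $N$ the electron density. Halving the frequency therefore roughly quadruples the absorption, which is why MF and lower HF signals suffer most in the D layer.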
During solar proton events, ionization can reach unusually high levels in the D-region over high and polar latitudes. Such very rare events are known as Polar Cap Absorption (or PCA) events, because the increased ionization significantly enhances the absorption of radio signals passing through the region. In fact, absorption levels can increase by many tens of dB during intense events, which is enough to absorb most (if not all) transpolar HF radio signal transmissions. Such events typically last less than 24 to 48 hours.
The E layer is the middle layer, 90 km (56 mi) to 150 km (93 mi) above the surface of the Earth. Ionization is due to soft X-ray (1–10 nm) and far ultraviolet (UV) solar radiation ionization of molecular oxygen (O2). Normally, at oblique incidence, this layer can only reflect radio waves having frequencies lower than about 10 MHz, and may contribute somewhat to the absorption of frequencies above that. However, during intense sporadic E events, the Es layer can reflect frequencies up to 50 MHz and higher. The vertical structure of the E layer is primarily determined by the competing effects of ionization and recombination. At night the E layer weakens because the primary source of ionization is no longer present. After sunset an increase in the height of the E layer maximum increases the range to which radio waves can travel by reflection from the layer.
This region is also known as the Kennelly–Heaviside layer or simply the Heaviside layer. Its existence was predicted in 1902 independently and almost simultaneously by the American electrical engineer Arthur Edwin Kennelly (1861–1939) and the British physicist Oliver Heaviside (1850–1925). In 1924 its existence was detected by Edward V. Appleton and Miles Barnett.
The Es layer (sporadic E layer) is characterized by small, thin clouds of intense ionization, which can support reflection of radio waves, frequently up to 50 MHz and rarely up to 450 MHz. Sporadic-E events may last for just a few minutes to many hours. Sporadic E propagation makes VHF operation by radio amateurs very exciting when long-distance propagation paths that are generally unreachable "open up" to two-way communication. The causes of sporadic E are multiple and are still being investigated by researchers. This propagation occurs every day during June and July in northern hemisphere mid-latitudes, when high signal levels are often reached. The skip distances are generally around 1,640 km (1,020 mi). Distances for one-hop propagation can be anywhere from 900 km (560 mi) to 2,500 km (1,600 mi). Multi-hop propagation over 3,500 km (2,200 mi) is also common, sometimes to distances of 15,000 km (9,300 mi) or more.
The F layer or region, also known as the Appleton–Barnett layer, extends from about 150 km (93 mi) to more than 500 km (310 mi) above the surface of Earth. It is the layer with the highest electron density, which implies signals penetrating this layer will escape into space. Electron production is dominated by extreme ultraviolet (UV, 10–100 nm) radiation ionizing atomic oxygen. The F layer consists of one layer (F2) at night, but during the day, a secondary peak (labelled F1) often forms in the electron density profile. Because the F2 layer remains by day and night, it is responsible for most skywave propagation of radio waves and long distance high frequency (HF, or shortwave) radio communications.
Above the F layer, the number of oxygen ions decreases and lighter ions such as hydrogen and helium become dominant. This region above the F layer peak and below the plasmasphere is called the topside ionosphere.
From 1972 to 1975 NASA launched the AEROS and AEROS B satellites to study the F region.
An ionospheric model is a mathematical description of the ionosphere as a function of location, altitude, day of year, phase of the sunspot cycle and geomagnetic activity. Geophysically, the state of the ionospheric plasma may be described by four parameters: electron density, electron and ion temperature and, since several species of ions are present, ionic composition. Radio propagation depends solely on electron density.
Models are usually expressed as computer programs. The model may be based on basic physics of the interactions of the ions and electrons with the neutral atmosphere and sunlight, or it may be a statistical description based on a large number of observations, or a combination of physics and observations. One of the most widely used models is the International Reference Ionosphere (IRI), which is based on data and specifies the four parameters just mentioned. The IRI is an international project sponsored by the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI). The major data sources are the worldwide network of ionosondes, the powerful incoherent scatter radars (Jicamarca, Arecibo, Millstone Hill, Malvern, St Santin), the ISIS and Alouette topside sounders, and in situ instruments on several satellites and rockets. IRI is updated yearly. IRI is more accurate in describing the variation of the electron density from the bottom of the ionosphere to the altitude of maximum density than in describing the total electron content (TEC). Since 1999 this model has been the international standard for the terrestrial ionosphere (standard TS16457).
Ionograms allow the true shape of the different layers to be deduced by computation. The nonhomogeneous structure of the electron/ion plasma produces rough echo traces, seen predominantly at night, at higher latitudes, and during disturbed conditions.
At mid-latitudes, the F2 layer daytime ion production is higher in the summer, as expected, since the Sun shines more directly on the Earth. However, there are seasonal changes in the molecular-to-atomic ratio of the neutral atmosphere that cause the summer ion loss rate to be even higher. The result is that the increase in the summertime loss overwhelms the increase in summertime production, and total F2 ionization is actually lower in the local summer months. This effect is known as the winter anomaly. The anomaly is always present in the northern hemisphere, but is usually absent in the southern hemisphere during periods of low solar activity.
Within approximately ± 20 degrees of the magnetic equator is the equatorial anomaly: a trough in the ionization of the F2 layer at the equator, with crests at about 17 degrees in magnetic latitude. The Earth's magnetic field lines are horizontal at the magnetic equator. Solar heating and tidal oscillations in the lower ionosphere move plasma up and across the magnetic field lines. This sets up a sheet of electric current in the E region which, with the horizontal magnetic field, forces ionization up into the F layer, concentrating at ± 20 degrees from the magnetic equator. This phenomenon is known as the equatorial fountain.
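The upward transport at the heart of this fountain is usually described as an E×B drift. The relation below is the standard expression for the drift velocity of plasma in crossed electric and magnetic fields; it is included here as background, not as a statement from this article:

$\mathbf{v}_{\text{drift}} = \frac{\mathbf{E}\times\mathbf{B}}{B^{2}}$

With an eastward daytime electric field and the horizontal, northward magnetic field at the dip equator, this drift is directed upward, lifting plasma that then diffuses along the field lines toward the crests near ± 20 degrees magnetic latitude.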
The worldwide solar-driven wind results in the so-called Sq (solar quiet) current system in the E region of the Earth's ionosphere (ionospheric dynamo region) (100–130 km (60–80 mi) altitude). Resulting from this current is an electrostatic field directed west–east (dawn–dusk) in the equatorial day side of the ionosphere. At the magnetic dip equator, where the geomagnetic field is horizontal, this electric field results in an enhanced eastward current flow within ± 3 degrees of the magnetic equator, known as the equatorial electrojet.
When the Sun is active, strong solar flares can occur that hit the sunlit side of Earth with hard X-rays. The X-rays penetrate to the D-region, releasing electrons that rapidly increase absorption, causing a high frequency (3–30 MHz) radio blackout that can persist for many hours after strong flares. During this time very low frequency (3–30 kHz) signals will be reflected by the D layer instead of the E layer, where the increased atmospheric density will usually increase the absorption of the wave and thus dampen it. As soon as the X-rays end, the sudden ionospheric disturbance (SID) or radio blackout steadily declines as the electrons in the D-region recombine rapidly and propagation gradually returns to pre-flare conditions over minutes to hours, depending on the solar flare strength and frequency.
Associated with solar flares is a release of high-energy protons. These particles can hit the Earth within 15 minutes to 2 hours of the solar flare. The protons spiral around and down the magnetic field lines of the Earth and penetrate into the atmosphere near the magnetic poles, increasing the ionization of the D and E layers. PCAs typically last anywhere from about an hour to several days, with an average of around 24 to 36 hours. Coronal mass ejections can also release energetic protons that enhance D-region absorption in the polar regions.
Geomagnetic storms and ionospheric storms are temporary and intense disturbances of the Earth's magnetosphere and ionosphere.
During a geomagnetic storm the F2 layer will become unstable, fragment, and may even disappear completely. In the Northern and Southern polar regions of the Earth aurorae will be observable in the night sky.
Lightning can cause ionospheric perturbations in the D-region in one of two ways. The first is through VLF (very low frequency) radio waves launched into the magnetosphere. These so-called "whistler" mode waves can interact with radiation belt particles and cause them to precipitate onto the ionosphere, adding ionization to the D-region. These disturbances are called "lightning-induced electron precipitation" (LEP) events.
Additional ionization can also occur from direct heating/ionization as a result of huge motions of charge in lightning strikes. These events are called early/fast events.
In 1925, C. T. R. Wilson proposed a mechanism by which electrical discharge from lightning storms could propagate upwards from clouds to the ionosphere. Around the same time, Robert Watson-Watt, working at the Radio Research Station in Slough, UK, suggested that the ionospheric sporadic E layer (Es) appeared to be enhanced as a result of lightning but that more work was needed. In 2005, C. Davis and C. Johnson, working at the Rutherford Appleton Laboratory in Oxfordshire, UK, demonstrated that the Es layer was indeed enhanced as a result of lightning activity. Their subsequent research has focused on the mechanism by which this process can occur.
Due to the ability of ionized atmospheric gases to refract high frequency (HF, or shortwave) radio waves, the ionosphere can reflect radio waves directed into the sky back toward the Earth. Radio waves directed at an angle into the sky can return to Earth beyond the horizon. This technique, called "skip" or "skywave" propagation, has been used since the 1920s to communicate at international or intercontinental distances. The returning radio waves can reflect off the Earth's surface into the sky again, allowing greater ranges to be achieved with multiple hops. This communication method is variable and unreliable, with reception over a given path depending on time of day or night, the seasons, weather, and the 11-year sunspot cycle. During the first half of the 20th century it was widely used for transoceanic telephone and telegraph service, and business and diplomatic communication. Due to its relative unreliability, shortwave radio communication has been mostly abandoned by the telecommunications industry, though it remains important for high-latitude communication where satellite-based radio communication is not possible. Shortwave broadcasting is useful in crossing international boundaries and covering large areas at low cost. Automated services still use shortwave radio frequencies, as do radio amateur hobbyists for private recreational contacts and to assist with emergency communications during natural disasters. Armed forces use shortwave so as to be independent of vulnerable infrastructure, including satellites, and the low latency of shortwave communications makes it attractive to stock traders, where milliseconds count.
When a radio wave reaches the ionosphere, the electric field in the wave forces the electrons in the ionosphere into oscillation at the same frequency as the radio wave. Some of the radio-frequency energy is given up to this resonant oscillation. The oscillating electrons will then either be lost to recombination or will re-radiate the original wave energy. Total refraction can occur when the collision frequency of the ionosphere is less than the radio frequency, and if the electron density in the ionosphere is great enough.
A qualitative understanding of how an electromagnetic wave propagates through the ionosphere can be obtained by recalling geometric optics. Since the ionosphere is a plasma, it can be shown that its refractive index is less than unity. Hence, the electromagnetic "ray" is bent away from the normal rather than toward the normal, as would occur if the refractive index were greater than unity. It can also be shown that the refractive index of a plasma, and hence of the ionosphere, is frequency-dependent; see Dispersion (optics).
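For a cold, collisionless, unmagnetized plasma (a common simplification of the full Appleton–Hartree relation, and an assumption introduced here rather than stated above), the refractive index takes the form

$n^{2} = 1 - \left(\frac{f_{p}}{f}\right)^{2}, \qquad f_{p} \approx 9\sqrt{N}\ \text{Hz}$

where $N$ is the electron density in electrons per cubic metre and $f_{p}$ is the plasma frequency. This makes both the sub-unity refractive index and its frequency dependence explicit: a wave with $f < f_{p}$ cannot propagate in the layer and is reflected, which leads directly to the critical frequency defined next.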
The critical frequency is the limiting frequency at or below which a radio wave is reflected by an ionospheric layer at vertical incidence. If the transmitted frequency is higher than the plasma frequency of the ionosphere, then the electrons cannot respond fast enough, and they are not able to re-radiate the signal. It is calculated as shown below:

$f_{\text{critical}} = 9\sqrt{N}$

where N = electron density per m³ and $f_{\text{critical}}$ is in Hz.
The Maximum Usable Frequency (MUF) is defined as the upper frequency limit that can be used for transmission between two points at a specified time.

$f_{\text{MUF}} = \frac{f_{\text{critical}}}{\sin \alpha}$

where $\alpha$ = angle of arrival, the angle of the wave relative to the horizon, and sin is the sine function.
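As a concrete illustration of the two relations above, the short Python sketch below computes the critical frequency from an assumed peak electron density and the corresponding MUF for a given take-off angle; the numbers are illustrative values chosen for the example, not measurements from this article.

import math

def critical_frequency(n_e):
    """Critical frequency in Hz for a peak electron density n_e in electrons/m^3."""
    return 9.0 * math.sqrt(n_e)

def maximum_usable_frequency(f_crit, alpha_deg):
    """MUF in Hz for critical frequency f_crit (Hz) and an arrival angle alpha_deg degrees above the horizon."""
    return f_crit / math.sin(math.radians(alpha_deg))

# Illustrative values: a daytime F2 peak density of 1e12 electrons/m^3
# and a ray arriving 15 degrees above the horizon.
f_c = critical_frequency(1e12)             # about 9 MHz
muf = maximum_usable_frequency(f_c, 15.0)  # about 35 MHz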
The cutoff frequency is the frequency below which a radio wave fails to penetrate a layer of the ionosphere at the incidence angle required for transmission between two specified points by refraction from the layer.
There are a number of models used to understand the effects of the ionosphere on global navigation satellite systems. The Klobuchar model is currently used to compensate for ionospheric effects in GPS. This model was developed at the US Air Force Geophysical Research Laboratory circa 1974 by John (Jack) Klobuchar. The Galileo navigation system uses the NeQuick model.
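The scale of the effect these models correct for can be illustrated with the standard first-order relation between slant total electron content (TEC) and group delay; this relation is textbook GNSS background rather than something stated above, and the sample numbers below are purely illustrative.

def ionospheric_delay_m(tec, freq_hz):
    """First-order ionospheric group delay in metres for a slant TEC in electrons/m^2 at frequency freq_hz (Hz)."""
    return 40.3 * tec / freq_hz ** 2

# Illustrative: 50 TEC units (5e17 electrons/m^2) on the GPS L1 frequency of 1575.42 MHz
delay = ionospheric_delay_m(5e17, 1575.42e6)  # roughly 8 metres of extra apparent range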
The open system electrodynamic tether, which uses the ionosphere, is being researched. The space tether uses plasma contactors and the ionosphere as parts of a circuit to extract energy from the Earth's magnetic field by electromagnetic induction.
Scientists explore the structure of the ionosphere with a wide variety of methods, including those described below.
A variety of experiments, such as HAARP (High Frequency Active Auroral Research Program), involve high power radio transmitters to modify the properties of the ionosphere. These investigations focus on studying the properties and behavior of ionospheric plasma, with particular emphasis on being able to understand and use it to enhance communications and surveillance systems for both civilian and military purposes. HAARP was started in 1993 as a proposed twenty-year experiment, and is currently active near Gakona, Alaska.
The SuperDARN radar project researches the high- and mid-latitudes using coherent backscatter of radio waves in the 8 to 20 MHz range. Coherent backscatter is similar to Bragg scattering in crystals and involves the constructive interference of scattering from ionospheric density irregularities. The project involves more than 11 countries and multiple radars in both hemispheres.
Scientists also examine the ionosphere through the changes that radio waves from satellites and stars undergo as they pass through it. The Arecibo Telescope, located in Puerto Rico, was originally intended to study Earth's ionosphere.
Ionograms show the virtual heights and critical frequencies of the ionospheric layers, as measured by an ionosonde. An ionosonde sweeps a range of frequencies, usually from 0.1 to 30 MHz, transmitting at vertical incidence to the ionosphere. As the frequency increases, each wave is refracted less by the ionization in the layer, and so each penetrates further before it is reflected. Eventually, a frequency is reached that enables the wave to penetrate the layer without being reflected. For ordinary mode waves, this occurs when the transmitted frequency just exceeds the peak plasma, or critical, frequency of the layer. Tracings of the reflected high frequency radio pulses are known as ionograms. Reduction rules are given in "URSI Handbook of Ionogram Interpretation and Reduction", edited by William Roy Piggott and Karl Rawer, Elsevier, Amsterdam, 1961 (translations into Chinese, French, Japanese and Russian are available).
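The "virtual height" reported on an ionogram is obtained by timing the echo and assuming the pulse travels at the vacuum speed of light over the whole path (an idealizing assumption, since the pulse actually slows near the reflection level):

$h' = \frac{c\,\tau}{2}$

where $\tau$ is the round-trip delay of the reflected pulse and $c$ is the speed of light; the virtual height therefore somewhat overestimates the true reflection height.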
Incoherent scatter radars operate above the critical frequencies. Unlike ionosondes, the technique can therefore also probe the ionosphere above the electron density peaks. The thermal fluctuations of the electron density scattering the transmitted signals lack coherence, which gave the technique its name. Their power spectrum contains information not only on the density, but also on the ion and electron temperatures, ion masses and drift velocities.
Radio occultation is a remote sensing technique in which a GNSS signal grazes the Earth tangentially, passing through the atmosphere, and is received by a Low Earth Orbit (LEO) satellite. As the signal passes through the atmosphere, it is refracted, curved and delayed. The LEO satellite samples the total electron content and bending angle of many such signal paths as it watches the GNSS satellite rise or set behind the Earth. Using an inverse Abel transform, a radial profile of refractivity at the tangent point on Earth can be reconstructed.
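One common form of this inversion (standard in the radio occultation literature, not derived in this article) recovers the refractive index n from the measured bending angle α as a function of impact parameter a:

$\ln n(a_{1}) = \frac{1}{\pi}\int_{a_{1}}^{\infty}\frac{\alpha(a)}{\sqrt{a^{2}-a_{1}^{2}}}\,da$

where $a_{1}$ is the impact parameter of the ray whose tangent point is being reconstructed.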
Major GNSS radio occultation missions include GRACE, CHAMP, and COSMIC.
In empirical models of the ionosphere such as NeQuick, the following indices are used as indirect indicators of the state of the ionosphere.
F10.7 and R12 are two indices commonly used in ionospheric modelling. Both are valuable for their long historical records covering multiple solar cycles. F10.7 is a measurement of the intensity of solar radio emissions at a frequency of 2800 MHz made using a ground radio telescope. R12 is a 12-month average of daily sunspot numbers. The two indices have been shown to be correlated with each other.
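As a minimal sketch of how such a smoothed index can be derived from raw observations (a simplified stand-in for the official definition of R12, which should be checked against the issuing agency), the Python below computes a 12-month running mean of monthly mean sunspot numbers:

def running_mean_12(monthly_means):
    """12-month running mean of a sequence of monthly mean sunspot numbers.

    Returns one smoothed value per full 12-month window; a simplified proxy
    for the R12 index described above.
    """
    window = 12
    return [
        sum(monthly_means[i:i + window]) / window
        for i in range(len(monthly_means) - window + 1)
    ]

# Illustrative (made-up) monthly means spanning just over a year:
smoothed = running_mean_12([60, 72, 81, 77, 90, 95, 103, 110, 98, 91, 85, 80, 76, 70])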
However, both indices are only indirect indicators of solar ultraviolet and X-ray emissions, which are primarily responsible for causing ionization in the Earth's upper atmosphere. Data are now available from the GOES spacecraft, which measures the background X-ray flux from the Sun, a parameter more closely related to the ionization levels in the ionosphere.
Objects in the Solar System that have appreciable atmospheres (i.e., all of the major planets and many of the larger natural satellites) generally produce ionospheres. Planets known to have ionospheres include Venus, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto.
The atmosphere of Titan includes an ionosphere that ranges from about 880 km (550 mi) to 1,300 km (810 mi) in altitude and contains carbon compounds. Ionospheres have also been observed at Io, Europa, Ganymede, and Triton. | [
]
| The ionosphere is the ionized part of the upper atmosphere of Earth, from about 48 km (30 mi) to 965 km (600 mi) above sea level, a region that includes the thermosphere and parts of the mesosphere and exosphere. The ionosphere is ionized by solar radiation. It plays an important role in atmospheric electricity and forms the inner edge of the magnetosphere. It has practical importance because, among other functions, it influences radio propagation to distant places on Earth. It also affects GPS signals that travel through this layer. | 2001-11-06T03:52:41Z | 2023-08-21T16:22:15Z | [
"Template:Wiktionary",
"Template:Cvt",
"Template:Refn",
"Template:Anchor",
"Template:Main",
"Template:Col-float-end",
"Template:IPAc-en",
"Template:Cite book",
"Template:Earth's atmosphere",
"Template:Short description",
"Template:Reflist",
"Template:Portal bar",
"Template:Authority control",
"Template:Nowrap",
"Template:Col-float-break",
"Template:Cite journal",
"Template:Webarchive",
"Template:Quotation",
"Template:Cite encyclopedia",
"Template:Cite web",
"Template:Doi",
"Template:Further",
"Template:Sub",
"Template:Stub section",
"Template:See also",
"Template:Col-float",
"Template:ISBN",
"Template:Commons category",
"Template:Convert"
]
| https://en.wikipedia.org/wiki/Ionosphere |
15,100 | Interlingua | Interlingua (/ɪntərˈlɪŋɡwə/) is an international auxiliary language (IAL) developed between 1937 and 1951 by the American International Auxiliary Language Association (IALA). It is a constructed language of the "naturalistic" variety, whose vocabulary, grammar, and other characteristics are derived from natural languages. Interlingua literature maintains that (written) Interlingua is comprehensible to the hundreds of millions of people who speak Romance languages, though it is actively spoken by only a few hundred.
Interlingua was developed to combine a simple, mostly regular grammar with a vocabulary common to a wide range of western European languages, making it easy to learn for those whose native languages were sources of Interlingua's vocabulary and grammar.
The name Interlingua comes from the Latin words inter, meaning 'between', and lingua, meaning 'tongue' or 'language'. These morphemes are the same in Interlingua; thus, Interlingua would mean 'between language'.
Interlingua focuses on common vocabulary shared by Western European languages, which are often descended from the Latin language (such as the Romance languages) and Greek language. Interlingua organizers have four "primary control languages" where, by default, a word (or variant thereof) is expected to appear in at least three of them to qualify for inclusion in Interlingua. These are English; French; Italian; and a combination of Spanish and Portuguese which are treated as a single mega-language for Interlingua purposes. Additionally, German and Russian have been dubbed "secondary control languages". While the result is often akin to Neo-Latin as the most frequent source of commonality, Interlingua words can have origins in any language, as long as they have drifted into the primary control languages as loanwords. For example, the Japanese words geisha and samurai are used as-is in most Western European languages, and are in Interlingua as well; the same with the likes of Guugu Yimithirr gangurru (Interlingua: kanguru, English: kangaroo) or the Finnish sauna.
The maintainers of Interlingua attempt to keep the grammar simple and word formation regular, and use only a small number of roots and affixes. This is intended to make the language quicker to learn.
The American heiress Alice Vanderbilt Morris (1874–1950) became interested in linguistics and the international auxiliary language movement in the early 1920s. In 1924, Morris and her husband, Dave Hennen Morris, established the non-profit International Auxiliary Language Association (IALA) in New York City. Their aim was to place the study of IALs on a more complex and scientific basis. Morris developed the research program of IALA in consultation with Edward Sapir, William Edward Collinson, and Otto Jespersen.
Investigations of the auxiliary language problem were in progress at the International Research Council, the American Council on Education, the American Council of Learned Societies, the British, French, Italian, and American Associations for the advancement of science, and other groups of specialists. Morris created IALA as a continuation of this work.
The IALA became a major supporter of mainstream American linguistics. Numerous studies by Sapir, Collinson, and Morris Swadesh in the 1930s and 1940s, for example, were funded by IALA. Alice Morris edited several of these studies and provided much of IALA's financial support. For example, Morris herself edited Sapir and Morris Swadesh's 1932 cross-linguistic study of ending-point phenomena, and Collinson's 1937 study of indication. IALA also received support from groups such as the Carnegie Corporation, the Ford Foundation, the Research Corporation, and the Rockefeller Foundation.
In its early years, IALA concerned itself with three tasks: finding other organizations around the world with similar goals; building a library of books about languages and interlinguistics; and comparing extant IALs, including Esperanto, Esperanto II, Ido, Peano's Interlingua (Latino sine flexione), Novial, and Interlingue (Occidental). In pursuit of the last goal, it conducted parallel studies of these languages, with comparative studies of national languages.
At the Second International Interlanguage Congress, held in Geneva in 1931, IALA began to break new ground; 27 recognized linguists signed a testimonial of support for IALA's research program. An additional eight added their signatures at the third congress, convened in Rome in 1933. That same year, Herbert N. Shenton and Edward Thorndike became influential in IALA's work by authoring studies in the interlinguistic field.
The first steps towards the finalization of Interlingua were taken in 1937, when a committee of 24 linguists from 19 universities published Some Criteria for an International Language and Commentary. However, the outbreak of World War II in 1939 cut short the intended biannual meetings of the committee.
Originally, the association had not intended to create its own language. Its goal was to identify which auxiliary language already available was best suited for international communication, and how to promote it more effectively. However, after ten years of research, many members of IALA concluded that none of the existing interlanguages were up to the task. By 1937, the members had made the decision to create a new language, to the surprise of the world's interlanguage community.
To that point, much of the debate had been equivocal on the decision to use naturalistic (e.g., Peano's Interlingua, Novial and Occidental) or systematic (e.g., Esperanto and Ido) words. During the war years, proponents of a naturalistic interlanguage won out. The first support was Thorndike's paper; the second was a concession by proponents of the systematic languages that thousands of words were already present in many, or even a majority, of the European languages. Their argument was that systematic derivation of words was a Procrustean bed, forcing the learner to unlearn and re-memorize a new derivation scheme when a usable vocabulary was already available. IALA from that point assumed the position that a naturalistic language would be best.
IALA's research activities were based in Liverpool, before relocating to New York due to the outbreak of World War II, where E. Clark Stillman established a new research staff. Stillman, with the assistance of Alexander Gode, constructed the methodology for selecting Interlingua vocabulary based on a comparison of control languages.
In 1943 Stillman left for war work and Gode became Acting Director of Research. IALA began to develop models of the proposed language, the first of which were presented in Morris's General Report in 1945.
From 1946 to 1948, French linguist André Martinet was Director of Research. During this period IALA continued to develop models and conducted polling to determine the optimal form of the final language. In 1946, IALA sent an extensive survey to more than 3,000 language teachers and related professionals on three continents.
Model P was unchanged from 1945; Model M was relatively modern in comparison to more classical P. Model K was slightly modified in the direction of Ido. The resulting four models that were canvassed were:
An example sentence:
The vote total ended up as follows: P 26.6%, M 37.5%, C 20%, and K 15%. The two more schematic models, C and K, were rejected. Of the two naturalistic models, M attracted somewhat more support than P. Taking national biases into account (for example, the French who were polled disproportionately favored Model M), IALA decided on a compromise between models M and P, with certain elements of C.
The German-American Gode and the French Martinet did not get along. Martinet resigned and took up a position at Columbia University in 1948, and Gode took on the last phase of Interlingua's development. His task was to combine elements of Model M and Model P; take the flaws seen in both by the polled community and repair them with elements of Model C as needed; and develop a vocabulary. Alice Vanderbilt Morris died in 1950, and the funding that had sustained IALA ceased, but sufficient funds remained to publish a dictionary and grammar. The vocabulary and grammar of Interlingua were first presented in 1951, when IALA published the finalized Interlingua Grammar and the Interlingua–English Dictionary (IED). In 1954, IALA published an introductory manual entitled Interlingua a Prime Vista ("Interlingua at First Sight").
Interlingua as presented by the IALA is very close to Peano's Interlingua (Latino sine flexione), both in its grammar and especially in its vocabulary. A distinct abbreviation was adopted: IA instead of IL.
An early practical application of Interlingua was the scientific newsletter Spectroscopia Molecular, published from 1952 to 1980. In 1954, the Second World Cardiological Congress in Washington, D.C. released summaries of its talks in both English and Interlingua. Within a few years, it found similar use at nine further medical congresses. Between the mid-1950s and the late 1970s, some thirty scientific and medical journals provided article summaries in Interlingua. Gode wrote a monthly column in Interlingua in the Science Newsletter published by the Science Service from the early 1950s until his death in 1970.
IALA closed its doors in 1953 but was not formally dissolved until 1956 or later. Its role in promoting Interlingua was largely taken on by Science Service, which hired Gode as head of its newly formed Interlingua Division. Hugh E. Blair, Gode's close friend and colleague, became his assistant. A successor organization, the Interlingua Institute, was founded in 1970 to promote Interlingua in the US and Canada. The new institute supported the work of other linguistic organizations, made considerable scholarly contributions and produced Interlingua summaries for scholarly and medical publications. One of its largest achievements was two immense volumes on phytopathology produced by the American Phytopathological Society in 1976 and 1977.
Beginning in the 1980s, the Union Mundial pro Interlingua (UMI) has held international conferences every two years (typical attendance at the earlier meetings was 50 to 100) and launched a publishing programme that eventually produced over 100 volumes. Several Scandinavian schools undertook projects that used Interlingua as a means of teaching the international scientific and intellectual vocabulary.
In 2000, the Interlingua Institute was dissolved amid funding disputes with the UMI; the American Interlingua Society, established the following year, succeeded the institute.
The original goal of an interlanguage meant for global events has faced competition from English as a lingua franca and International English in the 21st century. The scientific community frequently uses English in international conferences and publications, for example, rather than Interlingua. However, the rise of the Internet has made it easier for the general public with an interest in constructed languages to learn Interlingua. Interlingua is promoted internationally by the Union Mundial pro Interlingua. Periodicals and books are produced by national organizations, such as the Societate American pro Interlingua, the Svenska Sällskapet för Interlingua, and the Union Brazilian pro Interlingua.
Panorama in Interlingua is the most prominent of several Interlingua periodicals. It is a 28-page magazine published bimonthly that covers current events, science, editorials, and Interlingua.
It is not certain how many people have an active knowledge of Interlingua. Most constructed languages other than Esperanto have very few speakers. The Hungarian census of 2001, which collected information about languages spoken, found just two people in the entire country who claimed to speak Interlingua.
Advocates say that Interlingua's greatest advantage is that it is the most widely understood international auxiliary language besides Interlingua (IL) de A.p.I. by virtue of its naturalistic (as opposed to schematic) grammar and vocabulary, allowing those familiar with a Romance language, and educated speakers of English, to read and understand it without prior study.
Interlingua web pages include editions of Wikipedia and Wiktionary, and a number of periodicals, including Panorama in Interlingua from the Union Mundial pro Interlingua (UMI).
Every two years, the UMI organizes an international conference in a different country. In the year between, the Scandinavian Interlingua societies co-organize a conference in Sweden, as a number of Interlingua speakers are in Scandinavia. National organizations such as the Union Brazilian pro Interlingua also organize regular conferences.
Interlingua is taught in some high schools and universities, sometimes as a means of teaching other languages quickly, presenting interlinguistics, or introducing an international vocabulary. A two-week course was taught at the University of Granada in Spain in 2007, for example.
As of 2019, Google Keyboard supports Interlingua.
Interlingua has a largely phonemic orthography.
Interlingua uses the 26 letters of the ISO basic Latin alphabet with no diacritics. The alphabet, pronunciation in IPA and letter names in Interlingua are:
The book Grammar of Interlingua defines in §15 a "collateral orthography" that specifies how a word is spelt in Interlingua once assimilated, regardless of etymology.
Interlingua is primarily a written language, and the pronunciation is not entirely settled. The sounds in parentheses are not used by all speakers.
For the most part, consonants are pronounced as in English, while the vowels are pronounced as in Spanish. Written double consonants may be geminated as in Italian for extra clarity or pronounced as single as in English or French. Interlingua has five falling diphthongs, /ai/, /au/, /ei/, /eu/, and /oi/, although /ei/ and /oi/ are rare.
The general rule is that stress falls on the vowel before the last consonant (e.g., lingua, 'language', esser, 'to be', requirimento, 'requirement') ignoring the final plural -(e)s (e.g. linguas, the plural of lingua, still has the same stress as the singular), and where that is not possible, on the first vowel (via, 'way', io crea, 'I create'). There are a few exceptions, and the following rules account for most of them:
Speakers may pronounce all words according to the general rule mentioned above. For example, kilométro (stressed according to the general rule) is acceptable, although kilómetro is more common.
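The general rule lends itself to a mechanical statement. The following is a minimal, illustrative Python sketch of that rule only; it is not taken from any IALA publication, and it deliberately ignores the exceptional cases and digraphs such as qu.

```python
# Minimal sketch of Interlingua's *general* stress rule, for illustration only:
# stress the vowel before the last consonant, ignoring a final plural -(e)s;
# if no such vowel exists, stress the first vowel. Exceptions (e.g. kilometro)
# and digraphs such as "qu" are deliberately not modelled.

VOWELS = set("aeiou")

def stressed_vowel_index(word: str) -> int:
    """Return the index of the stressed vowel in `word`, or -1 if it has none."""
    w = word.lower()
    # Ignore the final plural -(e)s when locating the stress.
    if w.endswith("es") and len(w) > 2 and w[-3] not in VOWELS:
        w = w[:-2]
    elif w.endswith("s") and len(w) > 1 and w[-2] in VOWELS:
        w = w[:-1]
    last_consonant = max((i for i, c in enumerate(w) if c not in VOWELS), default=-1)
    # Vowel immediately preceding the last consonant, if any...
    for i in range(last_consonant - 1, -1, -1):
        if w[i] in VOWELS:
            return i
    # ...otherwise the first vowel of the word.
    return next((i for i, c in enumerate(w) if c in VOWELS), -1)

for word in ("lingua", "esser", "requirimento", "linguas", "via"):
    i = stressed_vowel_index(word)
    print(word, "->", word[:i] + word[i].upper() + word[i + 1:])
```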
Interlingua has no explicitly defined phonotactics. However, the prototyping procedure for determining Interlingua words, which strives for internationality, should in general lead naturally to words that are easy for most learners to pronounce. In the process of forming new words, an ending cannot always be added without a modification of some kind in between. A good example is the plural -s, which is always preceded by a vowel to prevent the occurrence of a hard-to-pronounce consonant cluster at the end. If the singular does not end in a vowel, the final -s becomes -es.
Unassimilated foreign loanwords, or borrowed words, are spelled as in their language of origin. Their spelling may contain diacritics, or accent marks. If the diacritics do not affect pronunciation, they are removed.
Words in Interlingua may be taken from any language, as long as their internationality is verified by their presence in seven control languages: Spanish, Portuguese, Italian, French, and English, with German and Russian acting as secondary controls. These are the most widely spoken Romance, Germanic, and Slavic languages, respectively. Because of their close relationship, Spanish and Portuguese are treated as one unit. The largest number of Interlingua words are of Latin origin, with the Greek and Germanic languages providing the second and third largest number. The remainder of the vocabulary originates in Slavic and non-Indo-European languages.
A word, that is a form with meaning, is eligible for the Interlingua vocabulary if it is verified by at least three of the four primary control languages. Either secondary control language can substitute for a primary language. Any word of Indo-European origin found in a control language can contribute to the eligibility of an international word. In some cases, the archaic or potential presence of a word can contribute to its eligibility.
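As an informal illustration of this eligibility test, the counting rule can be sketched as follows. This is a toy model, not IALA's actual working method, which compared meanings and prototypes rather than simple presence flags; the language labels and the set-based interface are assumptions made for the example.

```python
# Toy illustration of the eligibility rule described above. The language
# names and the simple presence-set interface are assumptions made for this
# sketch; IALA's real procedure compared meanings, not just word forms.

PRIMARY = {"English", "French", "Italian", "Spanish/Portuguese"}
SECONDARY = {"German", "Russian"}

def is_eligible(found_in):
    """True if a form occurs in at least three primary control languages,
    allowing a secondary control language to stand in for a missing primary."""
    primary_hits = len(found_in & PRIMARY)
    substitutes = min(len(found_in & SECONDARY), len(PRIMARY) - primary_hits)
    return primary_hits + substitutes >= 3

print(is_eligible({"English", "French", "Italian"}))             # True
print(is_eligible({"English", "Spanish/Portuguese", "German"}))  # True
print(is_eligible({"English", "German"}))                        # False
```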
A word can be potentially present in a language when a derivative is present, but the word itself is not. English proximity, for example, gives support to Interlingua proxime, meaning 'near, close'. This counts as long as one or more control languages actually have this basic root word, which the Romance languages all do. Potentiality also occurs when a concept is represented as a compound or derivative in a control language, the morphemes that make it up are themselves international, and the combination adequately conveys the meaning of the larger word. An example is Italian fiammifero (lit. 'flamebearer'), meaning 'match, lucifer', which leads to Interlingua flammifero, or 'match'. This word is thus said to be potentially present in the other languages although they may represent the meaning with a single morpheme.
Words do not enter the Interlingua vocabulary solely because cognates exist in a sufficient number of languages. If their meanings have become different over time, they are considered different words for the purpose of Interlingua eligibility. If they still have one or more meanings in common, however, the word can enter Interlingua with this smaller set of meanings.
If this procedure did not produce an international word, the word for a concept was originally taken from Latin (see below). This only occurred with a few grammatical particles.
The form of an Interlingua word is considered an international prototype with respect to the other words. On the one hand, it should be neutral, free from characteristics peculiar to one language. On the other hand, it should maximally capture the characteristics common to all contributing languages. As a result, it can be transformed into any of the contributing variants using only these language-specific characteristics. If the word has any derivatives that occur in the source languages with appropriate parallel meanings, then their morphological connection must remain intact; for example, the Interlingua word for 'time' is spelled tempore and not *tempus or *tempo in order to match it with its derived adjectives, such as temporal.
The language-specific characteristics are closely related to the sound laws of the individual languages; the resulting words are often close or even identical to the most recent form common to the contributing words. This sometimes corresponds with that of Vulgar Latin. At other times, it is much more recent or even contemporary. It is never older than the classical period.
The French œil, Italian occhio, Spanish ojo, and Portuguese olho appear quite different, but they descend from a historical form oculus. German Auge, Dutch oog and English eye (cf. Czech and Polish oko, Russian and Ukrainian око (óko)) are related to this form in that all three descend from Proto-Indo-European *okʷ. In addition, international derivatives like ocular and oculista occur in all of Interlingua's control languages. Each of these forms contributes to the eligibility of the Interlingua word. German and English base words do not influence the form of the Interlingua word, because their Indo-European connection is considered too remote. Instead, the remaining base words and especially the derivatives determine the form oculo found in Interlingua.
Words can also be included in Interlingua by deriving them using Interlingua words and affixes; a method called free word-building. Thus, in the Interlingua–English Dictionary (IED), Alexander Gode followed the principle that every word listed is accompanied by all of its clear compounds and derivatives, along with the word or words it is derived from. A reader skimming through the IED notices many entries followed by large groups of derived and compound words. A good example is the Interlingua word nation, which is followed by national, nationalismo, nationalista, nationalitate, nationalisar, international, internationalitate, and many other words.
Other words in the IED do not have derivatives listed. Gode saw these words as potential word families. Although all derived words in the IED are found in at least one control language, speakers may make free use of Interlingua roots and affixes. For example, jada ('jade') can be used to form jadificar ('to jadify, make into jade, make look like jade'), jadification, and so on. These word forms would be impermissible in English but would be good Interlingua.
Gode and Hugh E. Blair explained in the Interlingua Grammar that the basic principle of practical word-building is analogical. If a pattern can be found in the existing international vocabulary, new words can be formed according to that pattern. A meaning of the suffix -ista is 'person who practices the art or science of....' This suffix allows the derivation of biologista from biologia, physicista from physica, and so on. An Interlingua speaker can freely form saxophonista from saxophone and radiographista from radiographia by following the same pattern.
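For illustration only, the analogical pattern with -ista can be caricatured as a tiny string-rewriting function. The stem-extraction heuristic used here (drop a final -ia, otherwise a final vowel) is an assumption made for the example, not a rule stated in the Interlingua Grammar.

```python
# Illustrative sketch of analogical word-building with the suffix -ista.
# The stem-extraction heuristic below (drop a final -ia, otherwise a final
# vowel) is an assumption made for this example, not an IALA rule.

def derive_ista(base: str) -> str:
    """Derive a practitioner noun in -ista from a field or instrument name."""
    if base.endswith("ia"):
        stem = base[:-2]      # biologia -> biolog-
    elif base[-1] in "aeiou":
        stem = base[:-1]      # physica -> physic-, saxophone -> saxophon-
    else:
        stem = base
    return stem + "ista"

for base in ("biologia", "physica", "saxophone", "radiographia"):
    print(base, "->", derive_ista(base))
# biologia -> biologista, physica -> physicista,
# saxophone -> saxophonista, radiographia -> radiographista
```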
As noted above, the only limits to free word-building in Interlingua are clarity and usefulness. These concepts are touched upon here:
Any number of words could be formed by stringing roots and affixes together, but some would be more useful than others. For example, the English word rainer means 'a person who rains', but most people would be surprised that it is included in English dictionaries. The corresponding Interlingua word pluviator is unlikely to appear in a dictionary because of its lack of utility. Interlingua, like any traditional language, could build up large numbers of these words, but this would be undesirable.
Gode stressed the principle of clarity in free word-building. As Gode noted, the noun marinero ('mariner') can be formed from the adjective marin, because its meaning is clear. The noun marina meaning 'navy' cannot be formed, because its meaning would not be clear from the adjective and suffix that gave rise to it.
Interlingua has been developed to omit any grammatical feature that is absent from any one primary control language. Thus, Interlingua has no noun–adjective agreement by gender, case, or number (cf. Spanish and Portuguese gatas negras or Italian gatte nere, 'black female cats'), because this is absent from English, and it has no progressive verb tenses (English I am reading), because they are absent from French. Conversely, Interlingua distinguishes singular nouns from plural nouns because all the control languages do. With respect to the secondary control languages, Interlingua has articles, unlike Russian.
The definite article le is invariable, as in English ("the"). Nouns have no grammatical gender. Plurals are formed by adding -s, or -es after a final consonant. Personal pronouns take one form for the subject and one for the direct object and reflexive. In the third person, the reflexive is always se. Most adverbs are derived regularly from adjectives by adding -mente, or -amente after a -c. An adverb can be formed from any adjective in this way.
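These regular formations are simple enough to state mechanically. The following minimal sketch is an illustration of just the rules mentioned in this paragraph, not an exhaustive implementation of the Interlingua Grammar.

```python
# Minimal sketch of the regular formations described above: plural in -s
# (or -es after a final consonant) and adverbs in -mente (or -amente after
# a final -c). Irregular or special cases are not modelled.

VOWELS = "aeiou"

def pluralize(noun: str) -> str:
    """lingua -> linguas, nation -> nationes."""
    return noun + ("s" if noun[-1] in VOWELS else "es")

def adverbialize(adjective: str) -> str:
    """rapide -> rapidemente; after -c, add -amente (fanatic -> fanaticamente)."""
    return adjective + ("amente" if adjective.endswith("c") else "mente")

print(pluralize("lingua"), pluralize("nation"))          # linguas nationes
print(adverbialize("rapide"), adverbialize("fanatic"))   # rapidemente fanaticamente
```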
Verbs take the same form for all persons (io vive, tu vive, illa vive, 'I live', 'you live', 'she lives'). The indicative (pare, 'appear', 'appears') is the same as the imperative (pare! 'appear!'), and there is no subjunctive. Three common verbs usually take short forms in the present tense: es for 'is', 'am', 'are;' ha for 'has', 'have;' and va for 'go', 'goes'. A few irregular verb forms are available, but rarely used.
There are four simple tenses (present, past, future, and conditional), three compound tenses (past, future, and conditional), and the passive voice. The compound structures employ an auxiliary plus the infinitive or the past participle (e.g., Ille ha arrivate, 'He has arrived'). Simple and compound tenses can be combined in various ways to express more complex tenses (e.g., Nos haberea morite, 'We would have died').
Word order is subject–verb–object, except that a direct object pronoun or reflexive pronoun comes before the verb (io les vide, 'I see them'). Adjectives may precede or follow the nouns they modify, but they most often follow them. The position of adverbs is flexible, though constrained by common sense.
The grammar of Interlingua has been described as similar to that of the Romance languages, but simplified, primarily under the influence of English. A 1991 paper argued that Interlingua's grammar was similar to the simple grammars of Japanese and particularly Chinese.
F. P. Gopsill has written that Interlingua has no irregularities, although Gode's Interlingua Grammar suggests that Interlingua has a small number of irregularities.
One criticism that applies to naturalistic constructed languages in general is that if an educated traveler is willing to learn a naturalistic conlang, they may find it even more useful to learn a natural language outright, such as International English. Planned conlangs at least hold out the promise of "fixing" or standardizing certain irregular aspects of natural languages and providing unique advantages, despite the lack of speakers, but naturalistic conlangs have to compete with the natural languages they are based on. In practice, conferences with international attendance tend to be held in a natural language popular among the attendees rather than an international auxiliary language.
From an essay by Alexander Gode:
As with Esperanto, there have been proposals for a flag of Interlingua; the proposal by Czech translator Karel Podrazil is recognized by multilingual sites. It consists of a white four-pointed star extending to the edges of the flag and dividing it into an upper blue and lower red half. The star is symbolic of the four cardinal directions, and the two halves symbolize Romance and non-Romance speakers of Interlingua who understand each other.
Another symbol of Interlingua is the Blue Marble surrounded by twelve stars on a black or blue background, echoing the twelve stars of the Flag of Europe (because the source languages of Interlingua are purely European). | [
{
"paragraph_id": 0,
"text": "Interlingua (/ɪntərˈlɪŋɡwə/) is an international auxiliary language (IAL) developed between 1937 and 1951 by the American International Auxiliary Language Association (IALA). It is a constructed language of the \"naturalistic\" variety, whose vocabulary, grammar, and other characteristics are derived from natural languages. Interlingua literature maintains that (written) Interlingua is comprehensible to the hundreds of millions of people who speak Romance languages, though it is actively spoken by only a few hundred.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Interlingua was developed to combine a simple, mostly regular grammar with a vocabulary common to a wide range of western European languages, making it easy to learn for those whose native languages were sources of Interlingua's vocabulary and grammar.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The name Interlingua comes from the Latin words inter, meaning 'between', and lingua, meaning 'tongue' or 'language'. These morphemes are the same in Interlingua; thus, Interlingua would mean 'between language'.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Interlingua focuses on common vocabulary shared by Western European languages, which are often descended from the Latin language (such as the Romance languages) and Greek language. Interlingua organizers have four \"primary control languages\" where, by default, a word (or variant thereof) is expected to appear in at least three of them to qualify for inclusion in Interlingua. These are English; French; Italian; and a combination of Spanish and Portuguese which are treated as a single mega-language for Interlingua purposes. Additionally, German and Russian have been dubbed \"secondary control languages\". While the result is often akin to Neo-Latin as the most frequent source of commonality, Interlingua words can have origins in any language, as long as they have drifted into the primary control languages as loanwords. For example, the Japanese words geisha and samurai are used as-is in most Western European languages, and are in Interlingua as well; the same with the likes of Guugu Yimithirr gangurru (Interlingua: kanguru, English: kangaroo) or the Finnish sauna.",
"title": "Overview"
},
{
"paragraph_id": 4,
"text": "The maintainers of Interlingua attempt to keep the grammar simple and word formation regular, and use only a small number of roots and affixes. This is intended to make the language quicker to learn.",
"title": "Overview"
},
{
"paragraph_id": 5,
"text": "The American heiress Alice Vanderbilt Morris (1874–1950) became interested in linguistics and the international auxiliary language movement in the early 1920s. In 1924, Morris and her husband, Dave Hennen Morris, established the non-profit International Auxiliary Language Association (IALA) in New York City. Their aim was to place the study of IALs on a more complex and scientific basis. Morris developed the research program of IALA in consultation with Edward Sapir, William Edward Collinson, and Otto Jespersen.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Investigations of the auxiliary language problem were in progress at the International Research Council, the American Council on Education, the American Council of Learned Societies, the British, French, Italian, and American Associations for the advancement of science, and other groups of specialists. Morris created IALA as a continuation of this work.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The IALA became a major supporter of mainstream American linguistics. Numerous studies by Sapir, Collinson, and Morris Swadesh in the 1930s and 1940s, for example, were funded by IALA. Alice Morris edited several of these studies and provided much of IALA's financial support. For example, Morris herself edited Sapir and Morris Swadesh's 1932 cross-linguistic study of ending-point phenomena, and Collinson's 1937 study of indication. IALA also received support from groups such as the Carnegie Corporation, the Ford Foundation, the Research Corporation, and the Rockefeller Foundation.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In its early years, IALA concerned itself with three tasks: finding other organizations around the world with similar goals; building a library of books about languages and interlinguistics; and comparing extant IALs, including Esperanto, Esperanto II, Ido, Peano's Interlingua (Latino sine flexione), Novial, and Interlingue (Occidental). In pursuit of the last goal, it conducted parallel studies of these languages, with comparative studies of national languages.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "At the Second International Interlanguage Congress, held in Geneva in 1931, IALA began to break new ground; 27 recognized linguists signed a testimonial of support for IALA's research program. An additional eight added their signatures at the third congress, convened in Rome in 1933. That same year, Herbert N. Shenton and Edward Thorndike became influential in IALA's work by authoring studies in the interlinguistic field.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The first steps towards the finalization of Interlingua were taken in 1937, when a committee of 24 linguists from 19 universities published Some Criteria for an International Language and Commentary. However, the outbreak of World War II in 1939 cut short the intended biannual meetings of the committee.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Originally, the association had not intended to create its own language. Its goal was to identify which auxiliary language already available was best suited for international communication, and how to promote it more effectively. However, after ten years of research, many members of IALA concluded that none of the existing interlanguages were up to the task. By 1937, the members had made the decision to create a new language, to the surprise of the world's interlanguage community.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "To that point, much of the debate had been equivocal on the decision to use naturalistic (e.g., Peano's Interlingua, Novial and Occidental) or systematic (e.g., Esperanto and Ido) words. During the war years, proponents of a naturalistic interlanguage won out. The first support was Thorndike's paper; the second was a concession by proponents of the systematic languages that thousands of words were already present in many, or even a majority, of the European languages. Their argument was that systematic derivation of words was a Procrustean bed, forcing the learner to unlearn and re-memorize a new derivation scheme when a usable vocabulary was already available. IALA from that point assumed the position that a naturalistic language would be best.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "IALA's research activities were based in Liverpool, before relocating to New York due to the outbreak of World War II, where E. Clark Stillman established a new research staff. Stillman, with the assistance of Alexander Gode, constructed the methodology for selecting Interlingua vocabulary based on a comparison of control languages.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In 1943 Stillman left for war work and Gode became Acting Director of Research. IALA began to develop models of the proposed language, the first of which were presented in Morris's General Report in 1945.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "From 1946 to 1948, French linguist André Martinet was Director of Research. During this period IALA continued to develop models and conducted polling to determine the optimal form of the final language. In 1946, IALA sent an extensive survey to more than 3,000 language teachers and related professionals on three continents.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Model P was unchanged from 1945; Model M was relatively modern in comparison to more classical P. Model K was slightly modified in the direction of Ido. The resulting four models that were canvassed were:",
"title": "History"
},
{
"paragraph_id": 17,
"text": "An example sentence:",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The vote total ended up as follows: P 26.6%, M 37.5%, C 20%, and K 15%. The two more schematic models, C and K, were rejected. Of the two naturalistic models, M attracted somewhat more support than P. Taking national biases into account (for example, the French who were polled disproportionately favored Model M), IALA decided on a compromise between models M and P, with certain elements of C.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The German-American Gode and the French Martinet did not get along. Martinet resigned and took up a position at Columbia University in 1948, and Gode took on the last phase of Interlingua's development. His task was to combine elements of Model M and Model P; take the flaws seen in both by the polled community and repair them with elements of Model C as needed; and develop a vocabulary. Alice Vanderbilt Morris died in 1950, and the funding that had sustained IALA ceased, but sufficient funds remained to publish a dictionary and grammar. The vocabulary and grammar of Interlingua were first presented in 1951, when IALA published the finalized Interlingua Grammar and the Interlingua–English Dictionary (IED). In 1954, IALA published an introductory manual entitled Interlingua a Prime Vista (\"Interlingua at First Sight\").",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Interlingua as presented by the IALA is very close to Peano's Interlingua (Latino sine flexione), both in its grammar and especially in its vocabulary. A distinct abbreviation was adopted: IA instead of IL.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "An early practical application of Interlingua was the scientific newsletter Spectroscopia Molecular, published from 1952 to 1980. In 1954, the Second World Cardiological Congress in Washington, D.C. released summaries of its talks in both English and Interlingua. Within a few years, it found similar use at nine further medical congresses. Between the mid-1950s and the late 1970s, some thirty scientific and medical journals provided article summaries in Interlingua. Gode wrote a monthly column in Interlingua in the Science Newsletter published by the Science Service from the early 1950s until his death in 1970.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "IALA closed its doors in 1953 but was not formally dissolved until 1956 or later. Its role in promoting Interlingua was largely taken on by Science Service, which hired Gode as head of its newly formed Interlingua Division. Hugh E. Blair, Gode's close friend and colleague, became his assistant. A successor organization, the Interlingua Institute, was founded in 1970 to promote Interlingua in the US and Canada. The new institute supported the work of other linguistic organizations, made considerable scholarly contributions and produced Interlingua summaries for scholarly and medical publications. One of its largest achievements was two immense volumes on phytopathology produced by the American Phytopathological Society in 1976 and 1977.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Beginning in the 1980s, UMI has held international conferences every two years (typical attendance at the earlier meetings was 50 to 100) and launched a publishing programme that eventually produced over 100 volumes. Several Scandinavian schools undertook projects that used Interlingua as a means of teaching the international scientific and intellectual vocabulary.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "In 2000, the Interlingua Institute was dissolved amid funding disputes with the UMI; the American Interlingua Society, established the following year, succeeded the institute.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "The original goal of an interlanguage meant for global events has faced competition from English as a lingua franca and International English in the 21st century. The scientific community frequently uses English in international conferences and publications, for example, rather than Interlingua. However, the rise of the Internet has made it easier for the general public with an interest in constructed languages to learn Interlingua. Interlingua is promoted internationally by the Union Mundial pro Interlingua. Periodicals and books are produced by national organizations, such as the Societate American pro Interlingua, the Svenska Sällskapet för Interlingua, and the Union Brazilian pro Interlingua.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Panorama In Interlingua is the most prominent of several Interlingua periodicals. It is a 28-page magazine published bimonthly that covers current events, science, editorials, and Interlingua.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "It is not certain how many people have an active knowledge of Interlingua. Most constructed languages other than Esperanto have very few speakers. The Hungarian census of 2001, which collected information about languages spoken, found just two people in the entire country who claimed to speak Interlingua.",
"title": "Community"
},
{
"paragraph_id": 28,
"text": "Advocates say that Interlingua's greatest advantage is that it is the most widely understood international auxiliary language besides Interlingua (IL) de A.p.I. by virtue of its naturalistic (as opposed to schematic) grammar and vocabulary, allowing those familiar with a Romance language, and educated speakers of English, to read and understand it without prior study.",
"title": "Community"
},
{
"paragraph_id": 29,
"text": "Interlingua web pages include editions of Wikipedia and Wiktionary, and a number of periodicals, including Panorama in Interlingua from the Union Mundial pro Interlingua (UMI).",
"title": "Community"
},
{
"paragraph_id": 30,
"text": "Every two years, the UMI organizes an international conference in a different country. In the year between, the Scandinavian Interlingua societies co-organize a conference in Sweden, as a number of Interlingua speakers are in Scandinavia. National organizations such as the Union Brazilian pro Interlingua also organize regular conferences.",
"title": "Community"
},
{
"paragraph_id": 31,
"text": "Interlingua is taught in some high schools and universities, sometimes as a means of teaching other languages quickly, presenting interlinguistics, or introducing an international vocabulary. A two-week course was taught at the University of Granada in Spain in 2007, for example.",
"title": "Community"
},
{
"paragraph_id": 32,
"text": "As of 2019, Google Keyboard supports Interlingua.",
"title": "Community"
},
{
"paragraph_id": 33,
"text": "Interlingua has a largely phonemic orthography.",
"title": "Orthography"
},
{
"paragraph_id": 34,
"text": "Interlingua uses the 26 letters of the ISO basic Latin alphabet with no diacritics. The alphabet, pronunciation in IPA and letter names in Interlingua are:",
"title": "Orthography"
},
{
"paragraph_id": 35,
"text": "The book Grammar of Interlingua defines in §15 a \"collateral orthography\" that defines how a word is spelt in Interlingua once assimilated regardless of etymology.",
"title": "Orthography"
},
{
"paragraph_id": 36,
"text": "Interlingua is primarily a written language, and the pronunciation is not entirely settled. The sounds in parentheses are not used by all speakers.",
"title": "Phonology"
},
{
"paragraph_id": 37,
"text": "For the most part, consonants are pronounced as in English, while the vowels are like Spanish. Written double consonants may be geminated as in Italian for extra clarity or pronounced as single as in English or French. Interlingua has five falling diphthongs, /ai/, /au/, /ei/, /eu/, and /oi/, although /ei/ and /oi/ are rare.",
"title": "Phonology"
},
{
"paragraph_id": 38,
"text": "The general rule is that stress falls on the vowel before the last consonant (e.g., lingua, 'language', esser, 'to be', requirimento, 'requirement') ignoring the final plural -(e)s (e.g. linguas, the plural of lingua, still has the same stress as the singular), and where that is not possible, on the first vowel (via, 'way', io crea, 'I create'). There are a few exceptions, and the following rules account for most of them:",
"title": "Phonology"
},
{
"paragraph_id": 39,
"text": "Speakers may pronounce all words according to the general rule mentioned above. For example, kilometro is acceptable, although kilometro is more common.",
"title": "Phonology"
},
{
"paragraph_id": 40,
"text": "Interlingua has no explicitly defined phonotactics. However, the prototyping procedure for determining Interlingua words, which strives for internationality, should in general lead naturally to words that are easy for most learners to pronounce. In the process of forming new words, an ending cannot always be added without a modification of some kind in between. A good example is the plural -s, which is always preceded by a vowel to prevent the occurrence of a hard-to-pronounce consonant cluster at the end. If the singular does not end in a vowel, the final -s becomes -es.",
"title": "Phonology"
},
{
"paragraph_id": 41,
"text": "Unassimilated foreign loanwords, or borrowed words, are spelled as in their language of origin. Their spelling may contain diacritics, or accent marks. If the diacritics do not affect pronunciation, they are removed.",
"title": "Phonology"
},
{
"paragraph_id": 42,
"text": "Words in Interlingua may be taken from any language, as long as their internationality is verified by their presence in seven control languages: Spanish, Portuguese, Italian, French, and English, with German and Russian acting as secondary controls. These are the most widely spoken Romance, Germanic, and Slavic languages, respectively. Because of their close relationship, Spanish and Portuguese are treated as one unit. The largest number of Interlingua words are of Latin origin, with the Greek and Germanic languages providing the second and third largest number. The remainder of the vocabulary originates in Slavic and non-Indo-European languages.",
"title": "Vocabulary"
},
{
"paragraph_id": 43,
"text": "A word, that is a form with meaning, is eligible for the Interlingua vocabulary if it is verified by at least three of the four primary control languages. Either secondary control language can substitute for a primary language. Any word of Indo-European origin found in a control language can contribute to the eligibility of an international word. In some cases, the archaic or potential presence of a word can contribute to its eligibility.",
"title": "Vocabulary"
},
{
"paragraph_id": 44,
"text": "A word can be potentially present in a language when a derivative is present, but the word itself is not. English proximity, for example, gives support to Interlingua proxime, meaning 'near, close'. This counts as long as one or more control languages actually have this basic root word, which the Romance languages all do. Potentiality also occurs when a concept is represented as a compound or derivative in a control language, the morphemes that make it up are themselves international, and the combination adequately conveys the meaning of the larger word. An example is Italian fiammifero (lit. 'flamebearer'), meaning 'match, lucifer', which leads to Interlingua flammifero, or 'match'. This word is thus said to be potentially present in the other languages although they may represent the meaning with a single morpheme.",
"title": "Vocabulary"
},
{
"paragraph_id": 45,
"text": "Words do not enter the Interlingua vocabulary solely because cognates exist in a sufficient number of languages. If their meanings have become different over time, they are considered different words for the purpose of Interlingua eligibility. If they still have one or more meanings in common, however, the word can enter Interlingua with this smaller set of meanings.",
"title": "Vocabulary"
},
{
"paragraph_id": 46,
"text": "If this procedure did not produce an international word, the word for a concept was originally taken from Latin (see below). This only occurred with a few grammatical particles.",
"title": "Vocabulary"
},
{
"paragraph_id": 47,
"text": "The form of an Interlingua word is considered an international prototype with respect to the other words. On the one hand, it should be neutral, free from characteristics peculiar to one language. On the other hand, it should maximally capture the characteristics common to all contributing languages. As a result, it can be transformed into any of the contributing variants using only these language-specific characteristics. If the word has any derivatives that occur in the source languages with appropriate parallel meanings, then their morphological connection must remain intact; for example, the Interlingua word for 'time' is spelled tempore and not *tempus or *tempo in order to match it with its derived adjectives, such as temporal.",
"title": "Vocabulary"
},
{
"paragraph_id": 48,
"text": "The language-specific characteristics are closely related to the sound laws of the individual languages; the resulting words are often close or even identical to the most recent form common to the contributing words. This sometimes corresponds with that of Vulgar Latin. At other times, it is much more recent or even contemporary. It is never older than the classical period.",
"title": "Vocabulary"
},
{
"paragraph_id": 49,
"text": "The French œil, Italian occhio, Spanish ojo, and Portuguese olho appear quite different, but they descend from a historical form oculus. German Auge, Dutch oog and English eye (cf. Czech and Polish oko, Russian and Ukrainian око (óko)) are related to this form in that all three descend from Proto-Indo-European *okʷ. In addition, international derivatives like ocular and oculista occur in all of Interlingua's control languages. Each of these forms contributes to the eligibility of the Interlingua word. German and English base words do not influence the form of the Interlingua word, because their Indo-European connection is considered too remote. Instead, the remaining base words and especially the derivatives determine the form oculo found in Interlingua.",
"title": "Vocabulary"
},
{
"paragraph_id": 50,
"text": "Words can also be included in Interlingua by deriving them using Interlingua words and affixes; a method called free word-building. Thus, in the Interlingua–English Dictionary (IED), Alexander Gode followed the principle that every word listed is accompanied by all of its clear compounds and derivatives, along with the word or words it is derived from. A reader skimming through the IED notices many entries followed by large groups of derived and compound words. A good example is the Interlingua word nation, which is followed by national, nationalismo, nationalista, nationalitate, nationalisar, international, internationalitate, and many other words.",
"title": "Vocabulary"
},
{
"paragraph_id": 51,
"text": "Other words in the IED do not have derivatives listed. Gode saw these words as potential word families. Although all derived words in the IED are found in at least one control language, speakers may make free use of Interlingua roots and affixes. For example, jada ('jade') can be used to form jadificar, ('to jadify, make into jade, make look like jade'), jadification, and so on. These word forms would be impermissible in English but would be good Interlingua.",
"title": "Vocabulary"
},
{
"paragraph_id": 52,
"text": "Gode and Hugh E. Blair explained in the Interlingua Grammar that the basic principle of practical word-building is analogical. If a pattern can be found in the existing international vocabulary, new words can be formed according to that pattern. A meaning of the suffix -ista is 'person who practices the art or science of....' This suffix allows the derivation of biologista from biologia, physicista from physica, and so on. An Interlingua speaker can freely form saxophonista from saxophone and radiographista from radiographia by following the same pattern.",
"title": "Vocabulary"
},
{
"paragraph_id": 53,
"text": "As noted above, the only limits to free word-building in Interlingua are clarity and usefulness. These concepts are touched upon here:",
"title": "Vocabulary"
},
{
"paragraph_id": 54,
"text": "Any number of words could be formed by stringing roots and affixes together, but some would be more useful than others. For example, the English word rainer means 'a person who rains', but most people would be surprised that it is included in English dictionaries. The corresponding Interlingua word pluviator is unlikely to appear in a dictionary because of its lack of utility. Interlingua, like any traditional language, could build up large numbers of these words, but this would be undesirable.",
"title": "Vocabulary"
},
{
"paragraph_id": 55,
"text": "Gode stressed the principle of clarity in free word-building. As Gode noted, the noun marinero ('mariner') can be formed from the adjective marin, because its meaning is clear. The noun marina meaning 'navy' cannot be formed, because its meaning would not be clear from the adjective and suffix that gave rise to it.",
"title": "Vocabulary"
},
{
"paragraph_id": 56,
"text": "Interlingua has been developed to omit any grammatical feature that is absent from any one primary control language. Thus, Interlingua has no noun–adjective agreement by gender, case, or number (cf. Spanish and Portuguese gatas negras or Italian gatte nere, 'black female cats'), because this is absent from English, and it has no progressive verb tenses (English I am reading), because they are absent from French. Conversely, Interlingua distinguishes singular nouns from plural nouns because all the control languages do. With respect to the secondary control languages, Interlingua has articles, unlike Russian.",
"title": "Grammar"
},
{
"paragraph_id": 57,
"text": "The definite article le is invariable, as in English (\"the\"). Nouns have no grammatical gender. Plurals are formed by adding -s, or -es after a final consonant. Personal pronouns take one form for the subject and one for the direct object and reflexive. In the third person, the reflexive is always se. Most adverbs are derived regularly from adjectives by adding -mente, or -amente after a -c. An adverb can be formed from any adjective in this way.",
"title": "Grammar"
},
{
"paragraph_id": 58,
"text": "Verbs take the same form for all persons (io vive, tu vive, illa vive, 'I live', 'you live', 'she lives'). The indicative (pare, 'appear', 'appears') is the same as the imperative (pare! 'appear!'), and there is no subjunctive. Three common verbs usually take short forms in the present tense: es for 'is', 'am', 'are;' ha for 'has', 'have;' and va for 'go', 'goes'. A few irregular verb forms are available, but rarely used.",
"title": "Grammar"
},
{
"paragraph_id": 59,
"text": "There are four simple tenses (present, past, future, and conditional), three compound tenses (past, future, and conditional), and the passive voice. The compound structures employ an auxiliary plus the infinitive or the past participle (e.g., Ille ha arrivate, 'He has arrived'). Simple and compound tenses can be combined in various ways to express more complex tenses (e.g., Nos haberea morite, 'We would have died').",
"title": "Grammar"
},
{
"paragraph_id": 60,
"text": "Word order is subject–verb–object, except that a direct object pronoun or reflexive pronoun comes before the verb (io les vide, 'I see them'). Adjectives may precede or follow the nouns they modify, but they most often follow it. The position of adverbs is flexible, though constrained by common sense.",
"title": "Grammar"
},
{
"paragraph_id": 61,
"text": "The grammar of Interlingua has been described as similar to that of the Romance languages, but simplified, primarily under the influence of English. A 1991 paper argued that Interlingua's grammar was similar to the simple grammars of Japanese and particularly Chinese.",
"title": "Grammar"
},
{
"paragraph_id": 62,
"text": "F. P. Gopsill has written that Interlingua has no irregularities, although Gode's Interlingua Grammar suggests that Interlingua has a small number of irregularities.",
"title": "Grammar"
},
{
"paragraph_id": 63,
"text": "One criticism that applies to naturalistic constructed languages in general is that if an educated traveler is willing to learn a naturalistic conlang, they may find it even more useful to learn a natural language outright, such as International English. Planned conlangs at least hold out the promise of \"fixing\" or standardizing certain irregular aspects of natural languages and providing unique advantages, despite the lack of speakers, but naturalistic conlangs have to compete with the natural languages they are based on. In practice, conferences with international attendance tend to be held in a natural language popular among the attendees rather than an international auxiliary language.",
"title": "Reception"
},
{
"paragraph_id": 64,
"text": "From an essay by Alexander Gode:",
"title": "Samples"
},
{
"paragraph_id": 65,
"text": "As with Esperanto, there have been proposals for a flag of Interlingua; the proposal by Czech translator Karel Podrazil is recognized by multilingual sites. It consists of a white four-pointed star extending to the edges of the flag and dividing it into an upper blue and lower red half. The star is symbolic of the four cardinal directions, and the two halves symbolize Romance and non-Romance speakers of Interlingua who understand each other.",
"title": "Flags and symbols"
},
{
"paragraph_id": 66,
"text": "Another symbol of Interlingua is the Blue Marble surrounded by twelve stars on a black or blue background, echoing the twelve stars of the Flag of Europe (because the source languages of Interlingua are purely European).",
"title": "Flags and symbols"
}
]
| Interlingua is an international auxiliary language (IAL) developed between 1937 and 1951 by the American International Auxiliary Language Association (IALA). It is a constructed language of the "naturalistic" variety, whose vocabulary, grammar, and other characteristics are derived from natural languages. Interlingua literature maintains that (written) Interlingua is comprehensible to the hundreds of millions of people who speak Romance languages, though it is actively spoken by only a few hundred. Interlingua was developed to combine a simple, mostly regular grammar with a vocabulary common to a wide range of western European languages, making it easy to learn for those whose native languages were sources of Interlingua's vocabulary and grammar. The name Interlingua comes from the Latin words inter, meaning 'between', and lingua, meaning 'tongue' or 'language'. These morphemes are the same in Interlingua; thus, Interlingua would mean 'between language'. | 2001-09-21T08:26:51Z | 2023-12-17T17:35:41Z | [
"Template:IPA link",
"Template:Efn",
"Template:Notelist",
"Template:Constructed languages",
"Template:Blockquote",
"Template:Curlie",
"Template:Short description",
"Template:Distinguish",
"Template:Lang",
"Template:Transl",
"Template:Main",
"Template:About",
"Template:Infobox language",
"Template:Columns-start",
"Template:Cite book",
"Template:Harvnb",
"Template:Use Oxford spelling",
"Template:IPAc-en",
"Template:Wiktla",
"Template:Asof",
"Template:PIE",
"Template:Expand section",
"Template:IPA",
"Template:Cite journal",
"Template:Citation needed",
"Template:Reflist",
"Template:Webarchive",
"Template:Cite web",
"Template:Official website",
"Template:Unreferenced section",
"Template:Column",
"Template:Columns-end",
"Template:Sister bar",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Interlingua |
15,102 | Isle of Wight | The Isle of Wight (/waɪt/ WYTE) is an island, English county and unitary authority in the English Channel, 2 to 5 miles (3.2 to 8.0 kilometres) off the coast of Hampshire, across the Solent. It is the largest and second-most populous island in England. Referred to as "The Island" by residents, the Isle of Wight has resorts that have been popular holiday destinations since Victorian times. It is known for its mild climate, coastal scenery, and verdant landscape of fields, downland, and chines. The island is historically part of Hampshire. The island is designated a UNESCO Biosphere Reserve.
The island has been home to the poets Algernon Charles Swinburne and Alfred, Lord Tennyson. Queen Victoria built her summer residence and final home, Osborne House at East Cowes on the Isle. It has a maritime and industrial tradition of boat-building, sail-making, the manufacture of flying boats, hovercraft, and Britain's space rockets. The island hosts annual music festivals, including the Isle of Wight Festival, which in 1970 was the largest rock music event ever held. It has well-conserved wildlife and some of Europe's richest cliffs and quarries of dinosaur fossils.
The island has played an essential part in the defence of the ports of Southampton and Portsmouth and has been near the front line of conflicts through the ages, having faced the Spanish Armada and weathered the Battle of Britain. Being rural for most of its history, its Victorian fashionability and the growing affordability of holidays led to significant urban development during the late 19th and early 20th centuries.
The island became a separate administrative county in 1890, independent of Hampshire. It continued to share the Lord Lieutenant of Hampshire until 1974, when it was made a ceremonial county in its own right. The island no longer has administrative links to Hampshire. However, the two counties share their police force and fire and rescue service, and the island's Anglican churches belong to the Diocese of Portsmouth (originally Winchester). A combined local authority with Portsmouth and Southampton was considered as part of a regional devolution package but was subsequently rejected by the UK government in 2018.
The quickest public transport link to the mainland is the hovercraft (Hovertravel) from Ryde to Southsea. Three vehicle ferries and two catamaran services cross the Solent to Southampton, Lymington, and Portsmouth via the island's largest ferry operator, Wightlink, and the island's second-largest ferry company, Red Funnel. Tourism is the largest industry on the island.
The oldest records that give a name for the Isle of Wight are from the Roman Empire. It was called Vectis or Vecta in Latin and Iktis or Ouiktis in Greek. Latin Vecta, Old English Wiht, and Old Welsh Gueid and Guith were recorded from the Anglo-Saxon period. The Domesday Book called the island Wit. The modern Welsh name is Ynys Wyth (ynys meaning island). These are all variants of the same name, possibly Celtic in origin.
Inhabitants of the Isle of Wight were known as Wihtware.
During the Pleistocene glacial periods, sea levels were lower, and the present-day Solent was part of the valley of the Solent River. The river flowed eastward from Dorset, following the course of the modern Solent strait, before travelling south and southwest towards the major Channel River system. At these times, extensive gravel terraces associated with the Solent River and the forerunners of the island's modern rivers were deposited. During warmer interglacial periods, silts, beach gravels, clays, and muds of marine and estuarine origin were deposited due to higher sea levels, similar to those experienced today.
The earliest clear evidence of Lower Palaeolithic archaic human occupation on what is now the Isle of Wight is found close to Priory Bay. More than 300 Acheulean handaxes have been recovered from the beach and cliff slopes, originating from a sequence of Pleistocene gravels dating approximately to MIS 11–MIS 9 (424,000–374,000 years ago). Reworked and abraded artefacts found at the site may, however, be considerably older, closer to 500,000 years old. The identity of the hominids who produced these tools is unknown. However, sites and fossils of the same age range in Europe are often attributed to Homo heidelbergensis or early populations of Neanderthals.
A Middle Palaeolithic Mousterian flint assemblage, consisting of 50 handaxes and debitage, has been recovered from Great Pan Farm in the Medina Valley near Newport. Gravel sequences at the site have been dated to the MIS 3 interstadial during the last glacial period (c. 50,000 years ago). These tools are associated with the late Neanderthal occupation, and evidence of late Neanderthal presence is seen across Britain at this time.
No significant evidence of Upper Palaeolithic activity exists on the Isle of Wight. This period is associated with the expansion and establishment of populations of modern human (Homo sapiens) hunter-gatherers in Europe, beginning around 45,000 years ago. However, evidence of late Upper Palaeolithic activity has been found at nearby sites on the mainland, notably Hengistbury Head in Dorset, dating to just before the onset of the Holocene and the end of the last glacial period.
A submerged escarpment 11 m below sea level off Bouldnor Cliff on the island's northwest coastline is home to an internationally significant Mesolithic archaeological site. The site has yielded evidence of seasonal occupation by Mesolithic hunter-gatherers dating to c. 6050 BC. Finds include flint tools, burnt flint, worked timbers, wooden platforms, and pits. The worked wood shows evidence of splitting large planks from oak trunks, interpreted as being intended for use as dug-out canoes. DNA analysis of sediments at the site yielded wheat DNA, not found in Britain until the Neolithic, 2,000 years after the occupation at Bouldnor Cliff. It has been suggested this is evidence of wide-reaching trade in Mesolithic Europe; however, the contemporaneity of the wheat with the Mesolithic occupation has been contested. When hunter-gatherers used the site, it was located on a river bank surrounded by wetlands and woodland. As sea levels rose throughout the Holocene, the river valley slowly flooded, submerging the site.
Evidence of Mesolithic occupation on the island is generally found along the river valleys, particularly along the north of the island and in the former catchment of the western Yar. Other key sites are found at Newtown Creek, Werrar, and Wootton-Quarr.
Flint tools and monuments attest to Neolithic occupation on the Isle of Wight. Unlike the previous Mesolithic hunter-gatherer population, Neolithic communities on the Isle of Wight were based on farming and linked to a migration of Neolithic populations from France and northwest Europe to Britain c. 6,000 years ago.
The Isle of Wight's most visible Neolithic site is the Longstone at Mottistone, the remains of an early Neolithic long barrow. Constructed initially with two standing stones at the entrance, only one remains today. A Neolithic mortuary enclosure has been identified on Tennyson Down near Freshwater.
Bronze Age Britain had large reserves of tin, needed to smelt bronze, in the areas of Cornwall and Devon. At that time, the sea level was much lower, and carts of tin were brought across the Solent at low tide for export, possibly on the Ferriby Boats. Anthony Snodgrass suggests that a shortage of tin, as a part of the Bronze Age Collapse and trade disruptions in the Mediterranean around 1300 BC, forced metalworkers to seek an alternative to bronze. From the 7th century BC, during the Late Iron Age, the Isle of Wight, like the rest of Great Britain, was occupied by the Celtic Britons, in the form of the Durotriges tribe, as attested by finds of their coins, for example the South Wight Hoard and the Shalfleet Hoard. The island was known as Ynys Weith in Brittonic Celtic. Southeastern Britain experienced significant immigration, which is reflected in the current residents' genetic makeup. As the Iron Age began, the value of tin likely dropped sharply, greatly changing the Isle of Wight's economy. Trade, however, continued, as evidenced by the local abundance of European Iron Age coins.
Julius Caesar reported that the Belgae took the Isle of Wight in about 85 BC and recognised the culture of this general region as "Belgic" but made no reference to Vectis. The Roman historian Suetonius mentions that the island was captured by the commander Vespasian. The Romans built no towns on the island, but the remains of at least seven Roman villas have been found, indicating the prosperity of local agriculture. First-century exports were principally hides, enslaved people, hunting dogs, grain, cattle, silver, gold, and iron.
There are indications that the island had extensive trading links, with a port at Bouldnor, evidence of Bronze Age tin trading, and finds of Late Iron Age coins. Starting in AD 449, during the 5th and 6th centuries, groups of Germanic-speaking peoples from Northern Europe crossed the English Channel and gradually set about conquering the region.
During the Early Middle Ages, the island was settled by Jutes as the pagan kingdom of the Wihtwara under King Arwald. In 685, it was invaded by King Cædwalla of Wessex, who tried to replace the inhabitants with his followers. Though Arwald was defeated in 686 and the island became the last part of the English lands to be converted to Christianity, Cædwalla was unsuccessful in driving the Jutes from the island. Wight was added to Wessex and became part of England under King Alfred the Great, included within the shire of Hampshire.
It suffered especially from Viking raids and was often used as a winter base by Viking raiders when they could not reach Normandy. Later, both Earl Tostig and his brother Harold Godwinson (who became King Harold II) held manors on the island.
The Norman Conquest of 1066 created the position of Lord of the Isle of Wight; the island was given by William the Conqueror to his kinsman William FitzOsbern. Carisbrooke Priory and the fort of Carisbrooke Castle were then founded. Allegiance was sworn to FitzOsbern rather than the king; the Lordship was subsequently granted to the de Redvers family by Henry I after his succession in 1100.
For nearly 200 years the island was a semi-independent feudal fiefdom, with the de Redvers family ruling from Carisbrooke. The final private owner was the Countess Isabella de Fortibus, who, on her deathbed in 1293, was persuaded to sell it to Edward I. Subsequently, the island was under the control of the English Crown and its Lordship a royal appointment.
The island continued to be attacked from the continent: it was raided in 1374 by the fleet of Castile and in 1377 by French raiders who burned several towns, including Newtown.
Under Henry VIII, who developed the Royal Navy and its Portsmouth base, the island was fortified at Yarmouth, Cowes, East Cowes, and Sandown.
The French invasion on 21 July 1545 (famous for the sinking of the Mary Rose on the 19th) was repulsed by local militia.
During the English Civil War, King Charles I fled to the Isle of Wight, believing he would receive sympathy from Governor Robert Hammond. Still, Hammond imprisoned the king in Carisbrooke Castle.
During the Seven Years' War, the island was a staging post for British troops departing on expeditions against the French coast, such as the Raid on Rochefort. During 1759, with a planned French invasion imminent, a large force of soldiers was stationed there. The French called off their invasion following the Battle of Quiberon Bay.
In the spring of 1817, the twenty-one-year-old John Keats spent time in Carisbrooke and Shanklin, where he found inspiration in the countryside and coast, and worked on his long poem Endymion.
In the mid-1840s, potato blight was first found in the UK on the island, having arrived from Belgium. It was later transmitted to Ireland.
In the 1860s, fears of a possible French invasion prompted what remains, in real terms, the most expensive government spending project ever: fortifications were built on the island and in the Solent, as well as elsewhere along the south coast, including the Palmerston Forts, The Needles Batteries, and Fort Victoria.
The future Queen Victoria spent childhood holidays on the island and became fond of it. When she became queen, she made Osborne House her winter home. Subsequently, the island became a fashionable holiday resort for many, including Alfred, Lord Tennyson, Julia Margaret Cameron, and Charles Dickens (who wrote much of David Copperfield there), as well as the French painter Berthe Morisot and members of European royalty.
Until the queen's example, the island had been rural, with most people employed in farming, fishing, or boat-building. The boom in tourism, spurred by growing wealth and leisure time and by Victoria's presence, led to the significant urban development of the island's coastal resorts. As one report summarises, "The Queen's regular presence on the island helped put the Isle of Wight 'on the map' as a Victorian holiday and wellness destination ... and her former residence Osborne House is now one of the most visited attractions on the island." While on the island, the queen used a bathing machine that could be wheeled into the water on Osborne Beach; inside the small wooden hut, she could undress and then bathe, without being visible to others. Her machine had a changing room and a WC with plumbing. The refurbished machine is now displayed at the beach.
On 14 January 1878, Alexander Graham Bell demonstrated an early version of the telephone to the queen, placing calls to Cowes, Southampton, and London. These were the first publicly-witnessed long-distance telephone calls in the UK. The queen tried the device and considered the process to be "quite extraordinary" although the sound was "rather faint". She later asked to buy the equipment that was used, but Bell offered to make "a set of telephones" specifically for her.
The world's first radio station was set up by Guglielmo Marconi in 1897, during her reign, at the Needles Battery, at the western tip of the island. A 168-foot (51 m) high mast was erected near the Royal Needles Hotel as part of an experiment on communicating with ships at sea. That location is now the site of the Marconi Monument. In 1898 the first paid wireless telegram (called a "Marconigram") was sent from this station, and the island was for some time the home of the National Wireless Museum near Ryde.
Queen Victoria died at Osborne House on 22 January 1901 at the age of 81.
During the Second World War, the island was frequently bombed. With its proximity to German-occupied France, the island hosted observation stations, transmitters, and the RAF radar station at Ventnor. Adolf Hitler personally suggested an invasion of the Isle of Wight as a supplementary operation for Operation Sealion, and the possibility of an invasion was incorporated into Fuhrer Directive 16. Field Marshal Alan Brooke, in charge of defending the UK during 1940, was sceptical about being able to hold the island in the face of an invasion, instead considering that British forces would retreat to the western side of the island rather than commit forces against what might be a diversionary landing. In the end no invasion of the island was carried out as German naval commanders feared any invasion force might be cut off by British naval forces, particularly Royal Navy submarines.
The island was the starting point for one of the earlier Operation Pluto pipelines to feed fuel to Europe after the Normandy landings.
The Needles Battery was used to develop and test the Black Arrow and Black Knight space rockets, which were subsequently launched from Woomera, Australia.
The Isle of Wight Festival was a large rock festival near Afton Down, West Wight, in August 1970, following two smaller concerts in 1968 and 1969. The 1970 show was one of the last public performances by Jimi Hendrix and attracted somewhere between 600,000 and 700,000 attendees. The festival was revived in 2002 in a different format and is now an annual event.
On 26 October 2020, an oil tanker, the Nave Andromeda, suspected to have been hijacked by Nigerian stowaways, was stormed southeast of the island by the Special Boat Service. Seven people believed to be Nigerians seeking UK asylum were handed over to Hampshire Police.
The island has a single Member of Parliament. The Isle of Wight constituency covers the entire island, with 138,300 permanent residents in 2011, making it one of the most populous constituencies in the United Kingdom (more than 50% above the English average). In 2011, following passage of the Parliamentary Voting System and Constituencies Act, the Sixth Periodic Review of Westminster constituencies was to have changed this, but this was deferred to no earlier than October 2022 by the Electoral Registration and Administration Act 2013. Thus the single constituency remained for the 2015, 2017 and 2019 general elections. However, two separate East and West constituencies are proposed for the island under the 2022 review now under way.
The Isle of Wight is a ceremonial and non-metropolitan county. Since the abolition of its two borough councils and restructuring of the Isle of Wight County Council into the new Isle of Wight Council in 1995, it has been administered by a single tier Island Council which has the same powers as a unitary authority in England.
Elections in the constituency have traditionally been a battle between the Conservatives and the Liberal Democrats. Andrew Turner of the Conservative Party gained the seat from Peter Brand of the Lib Dems at the 2001 general election. Since 2009, Turner was embroiled in controversy over his expenses, health, and relationships with colleagues, with local Conservatives having tried but failed to remove him in the runup to the 2015 general election. He stood down prior to the 2017 snap general election, and the new Conservative Party candidate Bob Seely was elected with a majority of 21,069 votes.
At the Isle of Wight Council election of 2013, the Conservatives lost the majority which they had held since 2005 to the Island Independents, with Island Independent councillors holding 16 of the 40 seats, and a further five councillors sitting as independents outside the group. The Conservatives regained control, winning 10 more seats and taking their total to 25 at the 2017 local election, before losing 7 seats in 2021. A coalition entitled the Alliance Coalition was formed between independent, Green Party and Our Island councillors, with independent councillor Lora Peacey-Wilcox leading the council since May 2021.
There have been small regionalist movements: the Vectis National Party and the Isle of Wight Party; but they have attracted little support at elections.
The Isle of Wight is situated between the Solent and the English Channel, is roughly rhomboid in shape, and covers an area of 150 sq mi (380 km²). Slightly more than half, mainly in the west, is designated as the Isle of Wight Area of Outstanding Natural Beauty. The island has 100 sq mi (258 km²) of farmland, 20 sq mi (52 km²) of developed areas, and 57 miles (92 km) of coastline. Its landscapes are diverse, leading to its oft-quoted description as "England in miniature". In June 2019 the whole island was designated a UNESCO Biosphere Reserve, recognising the sustainable relationships between its residents and the local environment.
West Wight is predominantly rural, with dramatic coastlines dominated by the chalk downland ridge, running across the whole island and ending in the Needles stacks. The southwestern quarter is commonly referred to as the Back of the Wight, and has a unique character. The highest point on the island is St Boniface Down in the south east, which at 241 m (791 ft) is a marilyn. The most notable habitats on the rest of the island are probably the soft cliffs and sea ledges, which are scenic features, important for wildlife, and internationally protected.
The island has three principal rivers. The River Medina flows north into the Solent, the Eastern Yar flows roughly northeast to Bembridge Harbour, and the Western Yar flows the short distance from Freshwater Bay to a relatively large estuary at Yarmouth. Without human intervention the sea might well have split the island into three: at the west end where a bank of pebbles separates Freshwater Bay from the marshy backwaters of the Western Yar east of Freshwater, and at the east end where a thin strip of land separates Sandown Bay from the marshy Eastern Yar basin.
The Undercliff between St Catherine's Point and Bonchurch is the largest area of landslip morphology in western Europe.
The north coast is unusual in having four high tides each day, with a double high tide every twelve and a half hours. This arises because the western Solent is narrower than the eastern; the initial tide of water flowing from the west starts to ebb before the stronger flow around the south of the island returns through the eastern Solent to create a second high water.
The Isle of Wight is made up of a variety of rock types dating from early Cretaceous (around 127 million years ago) to the middle of the Palaeogene (around 30 million years ago). The geological structure is dominated by a large monocline which causes a marked change in age of strata from the northern younger Tertiary beds to the older Cretaceous beds of the south. This gives rise to a dip of almost 90 degrees in the chalk beds, seen best at the Needles.
The northern half of the island is mainly composed of clays, with the southern half formed of the chalk of the central east–west downs, as well as Upper and Lower Greensands and Wealden strata. These strata continue west from the island across the Solent into Dorset, forming the basin of Poole Harbour (Tertiary) and the Isle of Purbeck (Cretaceous) respectively. The chalky ridges of Wight and Purbeck were a single formation before they were breached by waters from the River Frome during the last ice age, forming the Solent and turning Wight into an island. The Needles, along with Old Harry Rocks on Purbeck, represent the edges of this breach.
All the rocks found on the island are sedimentary, such as limestones, mudstones and sandstones. They are rich in fossils; many can be seen exposed on beaches as the cliffs erode. Lignitic coal is present in small quantities within seams, and can be seen on the cliffs and shore at Whitecliff Bay. Fossilised molluscs have been found there, and also on the northern coast along with fossilised crocodiles, turtles and mammal bones; the youngest date back to around 30 million years ago.
The island is one of the most important areas in Europe for dinosaur fossils. The eroding cliffs often reveal previously hidden remains, particularly along the Back of the Wight. Dinosaur bones and fossilised footprints can be seen in and on the rocks exposed around the island's beaches, especially at Yaverland and Compton Bay, from the strata of the Wessex Formation. As a result, the island has been nicknamed "Dinosaur Island" and Dinosaur Isle was established in 2001.
The area was affected by sea level changes during the repeated Quaternary glaciations. The island probably became separated from the mainland about 125,000 years ago, during the Ipswichian interglacial.
Like the rest of the UK, the island has an oceanic climate, but is somewhat milder and sunnier, which makes it a holiday destination. It also has a longer growing season. Lower Ventnor and the neighbouring Undercliff have a particular microclimate, because of their sheltered position south of the downs. The island enjoys 1,800–2,100 hours of sunshine a year. Some years have almost no snow in winter, and only a few days of hard frost. The island is in Hardiness zone 9.
The Isle of Wight is one of the few places in England where the red squirrel is still flourishing; no grey squirrels are to be found. There are occasional sightings of wild deer, and there is a colony of wild goats on Ventnor's downs. Protected species such as the dormouse and rare bats can be found. The island is home to a population of European hedgehogs, and a rescue organisation devoted to them, Save Our Hedgehogs Isle of Wight, was founded in 2019. The Glanville fritillary butterfly's distribution in the United Kingdom is largely restricted to the edges of the island's crumbling cliffs.
A competition in 2002 named the pyramidal orchid as the Isle of Wight's county flower.
The table below shows the regional gross value added (in millions of pounds) by the Isle of Wight economy, at current prices, compiled by the Office for National Statistics.
According to the 2011 census, the island's population of 138,625 lives in 61,085 households, giving an average household size of 2.27 people.
41% of households own their home outright and a further 29% own with a mortgage, so in total 70% of households are owned (compared to 68% for South East England).
Compared to South East England, the island has fewer children (19% aged 0–17 compared to 22% for the South East) and more elderly (24% aged 65+ compared to 16% for the South East), giving an average age of 44 years for an island resident compared to 40 in South East England.
The largest industry on the island is tourism, but it also has significant agriculture, including sheep, dairy farming, and arable crops. Traditional agricultural commodities are more difficult to market off the island because of transport costs, but local farmers have succeeded in exploiting some specialist markets, with the higher price of such products absorbing the transport costs. One of the most successful agricultural sectors is now the growing of crops under cover, particularly salad crops including tomatoes and cucumbers. The island has a warmer climate and a longer growing season than much of the United Kingdom. Garlic has been grown in Newchurch for many years, and is, in part, exported to France. This has led to the establishment of an annual Garlic Festival at Newchurch, which is one of the largest events of the local calendar.
A favourable climate supports two vineyards, including one of the oldest in the British Isles at Adgestone. Lavender is grown for its oil. The largest agricultural sector has been dairying, but due to low milk prices and strict legislation for UK milk producers, the dairy industry has been in decline: there were nearly 150 producers in the mid-1980s, but now just 24.
Maritime industries, especially the making of sailcloth and boat building, have long been associated with the island, although this has diminished in recent years. GKN operates what began as the British Hovercraft Corporation, a subsidiary of (and known latterly as) Westland Aircraft, although they have reduced the extent of plant and workforce and sold the main site. Previously it had been the independent company Saunders-Roe, one of the island's most notable historic firms that produced many flying boats and the world's first hovercraft.
Another manufacturing activity is in composite materials, used by boat-builders and the wind turbine manufacturer Vestas, which has a wind turbine blade factory and testing facilities in West Medina Mills and East Cowes.
Bembridge Airfield is the home of Britten-Norman, manufacturers of the Islander and Trislander aircraft. This is shortly to become the site of the European assembly line for Cirrus light aircraft. The Norman Aeroplane Company is a smaller aircraft manufacturing company operating in Sandown. There have been three other firms that built planes on the island.
In 2005, Northern Petroleum began exploratory drilling for oil at its Sandhills-2 borehole at Porchfield, but ceased operations in October that year after failing to find significant reserves.
There are three breweries on the island. Goddards Brewery in Ryde opened in 1993. David Yates, who was head brewer of the Island Brewery, started brewing as Yates Brewery at the Inn at St Lawrence in 2000. Ventnor Brewery, which closed in 2009, was the last incarnation of Burt's Brewery, brewing since the 1840s in Ventnor. Until the 1960s most pubs were owned by Mews Brewery, situated in Newport near the old railway station, but it closed and the pubs were taken over by Strong's, and then by Whitbread. By some accounts Mews beer was apt to be rather cloudy and dark. In the 19th century they pioneered the use of screw top cans for export to British India.
The island's heritage is a major asset that has for many years supported its tourist economy. Holidays focused on natural heritage, including wildlife and geology, are becoming an alternative to the traditional British seaside holiday, which went into decline in the second half of the 20th century due to the increased affordability of foreign holidays. The island is still an important destination for coach tours from other parts of the United Kingdom.
Tourism is still the largest industry, and most island towns and villages offer hotels, hostels and camping sites. In 1999, it hosted 2.7 million visitors, with 1.5 million staying overnight, and 1.2 million day visits; only 150,000 of these were from abroad. Between 1993 and 2000, visits increased at an average rate of 3% per year.
At the turn of the 19th century the island had ten pleasure piers, including two at Ryde and a "chain pier" at Seaview. The Victoria Pier in Cowes succeeded the earlier Royal Pier but was itself removed in 1960. The piers at Ryde, Seaview, Sandown, Shanklin and Ventnor originally served a coastal steamer service that operated from Southsea on the mainland. The piers at Seaview, Shanklin, Ventnor and Alum Bay were all destroyed by various storms during the 20th century; only the railway pier at Ryde and the piers at Sandown, Totland Bay (currently closed to the public) and Yarmouth survive.
Blackgang Chine is the oldest theme park in Britain, opened in 1843. The skeleton of a dead whale that its founder Alexander Dabell found in 1844 is still on display.
As well as its more traditional attractions, the island is often host to walking or cycling holidays through the attractive scenery. An annual walking festival has attracted considerable interest. The 70 miles (113 km) Isle of Wight Coastal Path follows the coastline as far as possible, deviating onto roads where the route along the coast is impassable.
The tourist board for the island is Visit Isle of Wight, a non-profit company. It is the Destination Management Organisation for the Isle of Wight, a public and private sector partnership led by the private sector, and consists of over 1,200 companies, including the ferry operators, the local bus company, rail operator and tourism providers working together to collectively promote the island. Its income is derived from the Wight BID, a business improvement district levy fund.
A major contributor to the local economy is sailing and marine-related tourism.
Summer Camp at Camp Beaumont is an attraction at the old Bembridge School site.
The main local newspaper used to be the Isle of Wight County Press, but its circulation has declined over the years, especially since it was taken over by Newsquest in July 2017. In 2018 a new free newspaper was launched, the Isle of Wight Observer. By 2023 the newcomer was distributing 18,500 copies compared to the Isle of Wight County Press's total circulation of 11,575. The Island's leading news website, Island Echo, was launched in May 2012 and now publishes in excess of 5,000 news articles a year. Other online news sources for the Isle of Wight include On the Wight.
The island has a local commercial radio station and a community radio station: commercial station Isle of Wight Radio has broadcast in the medium-wave band since 1990 and on 107.0 MHz (with three smaller transmitters on 102.0 MHz) FM since 1998, as well as streaming on the Internet. Community station Vectis Radio has broadcast online since 2010, and in 2017 started broadcasting on FM 104.6. The station operates from the Riverside Centre in Newport. The island is also covered by a number of local stations on the mainland, including the BBC station BBC Radio Solent broadcast from Southampton. The island's not-for-profit community radio station Angel Radio opened in 2007. Angel Radio began broadcasting on 91.5 MHz from studios in Cowes and a transmitter near Newport.
Important broadcasting infrastructure includes Chillerton Down transmitting station with a mast that is the tallest structure on the island, and Rowridge transmitting station, which broadcasts the main television signal both locally and for most of Hampshire and parts of Dorset and West Sussex.
The local accent is similar to the traditional dialect of Hampshire, featuring the dropping of some consonants and an emphasis on longer vowels. It is similar to the West Country dialects heard in South West England, but less pronounced.
The island has its own local and regional words. Some, such as nipper/nips (a young male person), are still sometimes used and shared with neighbouring areas of the mainland. A few are unique to the island, for example overner and caulkhead (see below). Others are more obscure and now used mainly for comic emphasis, such as mallishag (meaning "caterpillar"), gurt meaning "large", nammit (a mid-morning snack) and gallybagger ("scarecrow", and now the name of a local cheese).
There remains occasional confusion between the Isle of Wight as a county and its former position within Hampshire. The island was regarded and administered as a part of Hampshire until 1890, when its distinct identity was recognised with the formation of Isle of Wight County Council (see also Politics of the Isle of Wight). However, it remained a part of Hampshire until the local government reforms of 1974, when it became a full ceremonial county with its own Lord Lieutenant.
In January 2009, the first general flag for the county was accepted by the Flag Institute.
Island residents are sometimes referred to as "Vectensians", "Vectians" or, if born on the island, "caulkheads". One theory is that this last comes from the once prevalent local industry of caulking or sealing wooden boats; the term became attached to islanders either because they were so employed, or as a derisory term for perceived unintelligent labourers from elsewhere. The term "overner" is used for island residents originating from the mainland (an abbreviated form of "overlander", which is an archaic term for "outsider" still found in parts of Australia).
Residents refer to the island as "The Island", as did Jane Austen in Mansfield Park, and sometimes to the UK mainland as "North Island".
To promote the island's identity and culture, the High Sheriff, Robin Courage, founded an Isle of Wight Day; the first was held on Saturday 24 September 2016.
Sport plays a key part of culture on the Isle of Wight. Sports include golf, marathon, cycling and sailing.
The island is home to the Isle of Wight Festival and until 2016, Bestival, before it was relocated to Lulworth Estate in Dorset. In 1970, the festival was headlined by Jimi Hendrix attracting an audience of 600,000, some six times the local population at the time. It is the home of the bands The Bees, Trixie's Big Red Motorbike, Level 42, and Wet Leg.
The Isle of Wight has 489 miles (787 km) of roadway. It does not have a motorway, although there is a short stretch of dual carriageway towards the north of Newport near the hospital and prison.
A comprehensive bus network operated by Southern Vectis links most settlements, with Newport as its central hub.
Journeys away from the island involve a ferry journey. Car ferry and passenger catamaran services are run by Wightlink and Red Funnel, and a hovercraft passenger service (the only such remaining in the world) by Hovertravel.
The island formerly had its own railway network of over 55 miles (89 km), but only one line remains in regular use. The Island Line is part of the United Kingdom's National Rail network, running a little under 9 miles (14 km) from Shanklin to Ryde Pier Head, where there is a connecting ferry service to Portsmouth Harbour station on the mainland network. The line was opened by the Isle of Wight Railway in 1864, and from 1996 to 2007 was run by the smallest train operating company on the network, Island Line Trains. It is notable for utilising old ex-London Underground rolling stock, due to the small size of its tunnels and unmodernised signalling. Branching off the Island Line at Smallbrook Junction is the heritage Isle of Wight Steam Railway, which runs for 5½ miles (8.9 km) to the outskirts of Wootton on the former line to Newport.
There are two airfields for general aviation, Isle of Wight Airport at Sandown and Bembridge Airport.
The island has over 200 miles (322 km) of cycleways, many of which can be enjoyed off-road. The principal trails are:
The Isle of Wight is near the densely populated south of England, yet separated from the mainland. This position led to it hosting three prisons: Albany, Camp Hill and Parkhurst, all located outside Newport near the main road to Cowes. Albany and Parkhurst were among the few Category A prisons in the UK until they were downgraded in the 1990s. The downgrading of Parkhurst was precipitated by a major escape: three prisoners (two murderers and a blackmailer) escaped from the prison on 3 January 1995 for four days, before being recaptured. Parkhurst enjoyed notoriety as one of the toughest jails in the United Kingdom, and housed many notable inmates including the Yorkshire Ripper Peter Sutcliffe, New Zealand drug lord Terry Clark and the Kray twins.
Camp Hill is located adjacent but to the west of Albany and Parkhurst, on the very edge of Parkhurst Forest, having been converted first to a borstal and later to a Category C prison. It was built on the site of an army camp (both Albany and Parkhurst were barracks); there is a small estate of tree-lined roads with the former officers' quarters (now privately owned) to the south and east. Camp Hill closed as a prison in March 2013.
The management of all three prisons was merged into a single administration, under HMP Isle of Wight in April 2009.
There are 69 local education authority-maintained schools on the Isle of Wight, and two independent schools. As a rural community, many of these are small and with fewer pupils than in urban areas. The Isle of Wight College is located on the outskirts of Newport.
From September 2010, there was a transition period from the three-tier system of primary, middle and high schools to the two-tier system that is usual in England. Some schools have now closed, such as Chale C.E. Primary. Others have become "federated", such as Brading C.E. Primary and St Helen's Primary. Christ the King College started as two "middle schools", Trinity Middle School and Archbishop King Catholic Middle School, but has now been converted into a dual-faith secondary school and sixth form.
Since September 2011 five new secondary schools, with an age range of 11 to 18 years, replaced the island's high schools (as a part of the previous three-tier system).
Notable residents have included:
The Isle of Wight has given names to many parts of former colonies, most notably Isle of Wight County in Virginia founded by settlers from the island in the 17th century. Its county seat is a town named Isle of Wight.
Other notable examples include: | [
{
"paragraph_id": 0,
"text": "The Isle of Wight (/waɪt/ WYTE) is an island, English county and unitary authority in the English Channel, 2 to 5 miles (3.2 to 8.0 kilometres) off the coast of Hampshire, across the Solent. It is the largest and second-most populous island in England. Referred to as \"The Island\" by residents, the Isle of Wight has resorts that have been popular holiday destinations since Victorian times. It is known for its mild climate, coastal scenery, and verdant landscape of fields, downland, and chines. The island is historically part of Hampshire. The island is designated a UNESCO Biosphere Reserve.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The island has been home to the poets Algernon Charles Swinburne and Alfred, Lord Tennyson. Queen Victoria built her summer residence and final home, Osborne House at East Cowes on the Isle. It has a maritime and industrial tradition of boat-building, sail-making, the manufacture of flying boats, hovercraft, and Britain's space rockets. The island hosts annual music festivals, including the Isle of Wight Festival, which in 1970 was the largest rock music event ever held. It has well-conserved wildlife and some of Europe's richest cliffs and quarries of dinosaur fossils.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The island has played an essential part in the defence of the ports of Southampton and Portsmouth and has been near the front line of conflicts through the ages, having faced the Spanish Armada and weathered the Battle of Britain. Being rural for most of its history, its Victorian fashionability and the growing affordability of holidays led to significant urban development during the late 19th and early 20th centuries.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The island became a separate administrative county in 1890, independent of Hampshire. It continued to share the Lord Lieutenant of Hampshire until 1974, when it was made a ceremonial county in its own right. The island no longer has administrative links to Hampshire. However, the two counties share their police force and fire and rescue service, and the island's Anglican churches belong to the Diocese of Portsmouth (originally Winchester). A combined local authority with Portsmouth and Southampton was considered as part of a regional devolution package but was subsequently rejected by the UK government in 2018.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The quickest public transport link to the mainland is the hovercraft (Hovertravel) from Ryde to Southsea. Three vehicle ferries and two catamaran services cross the Solent to Southampton, Lymington, and Portsmouth via the island's largest ferry operator, Wightlink, and the island's second-largest ferry company, Red Funnel. Tourism is the largest industry on the island.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The oldest records that give a name for the Isle of Wight are from the Roman Empire. It was called Vectis or Vecta in Latin and Iktis or Ouiktis in Greek. Latin Vecta, Old English Wiht, and Old Welsh Gueid and Guith were recorded from the Anglo-Saxon period. The Domesday Book called the island Wit. The modern Welsh name is Ynys Wyth (ynys meaning island). These are all variants of the same name, possibly Celtic in origin.",
"title": "Name"
},
{
"paragraph_id": 6,
"text": "Inhabitants of the Isle of Wight were known as Wihtware.",
"title": "Name"
},
{
"paragraph_id": 7,
"text": "During the Pleistocene glacial periods, sea levels were lower, and the present-day Solent was part of the valley of the Solent River. The river flowed eastward from Dorset, following the course of the modern Solent strait, before travelling south and southwest towards the major Channel River system. At these times, extensive gravel terraces associated with the Solent River and the forerunners of the island's modern rivers were deposited. During warmer interglacial periods, silts, beach gravels, clays, and muds of marine and estuarine origin were deposited due to higher sea levels, similar to those experienced today.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The earliest clear evidence of Lower Palaeolithic archaic human occupation on what is now the Isle of Wight is found close to Priory Bay. More than 300 acheulean handaxes have been recovered from the beach and cliff slopes, originating from a sequence of Pleistocene gravels dating approximately to MIS 11-MIS 9 (424,000–374,000 years ago). Reworked and abraded artefacts found at the site may be considerably older, however, closer to 500,000 years old. The identity of the hominids who produced these tools is unknown. However, sites and fossils of the same age range in Europe are often attributed to Homo heidelbergensis or early populations of Neanderthals.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "A Middle Palaeolithic Mousterian flint assemblage, consisting of 50 handaxes and debitage, has been recovered from Great Pan Farm in the Medina Valley near Newport. Gravel sequences at the site have been dated to the MIS 3 interstadial during the last glacial period (c. 50,000 years ago). These tools are associated with the late Neanderthal occupation, and evidence of late Neanderthal presence is seen across Britain at this time.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "No significant evidence of Upper Palaeolithic activity exists on the Isle of Wight. This period is associated with the expansion and establishment of populations of modern human (Homo sapiens) hunter-gatherers in Europe, beginning around 45,000 years ago. However, evidence of late Upper Palaeolithic activity has been found at nearby sites on the mainland, notably Hengistbury Head in Dorset, dating to just before the onset of the Holocene and the end of the last glacial period.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "A submerged escarpment 11m below sea level off Bouldnor Cliff on the island's northwest coastline is home to an internationally significant mesolithic archaeological site. The site has yielded evidence of seasonal occupation by Mesolithic hunter-gatherers dating to c. 6050 BC. Finds include flint tools, burnt flint, worked timbers, wooden platforms, and pits. The worked wood shows evidence of splitting large planks from oak trunks, interpreted as being intended for use as dug-out canoes. DNA analysis of sediments at the site yielded wheat DNA, not found in Britain until the Neolithic 2,000 years after the occupation at Bouldnor Cliff. It has been suggested this is evidence of wide-reaching trade in Mesolithic Europe; however, the contemporaneity of the wheat with the Mesolithic occupation has been contested. When hunter-gatherers used the site, it was located on a river bank surrounded by wetlands and woodland. As sea levels rose throughout the Holocene, the river valley slowly flooded, submerging the site.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Evidence of Mesolithic occupation on the island is generally found along the river valleys, particularly along the north of the island and in the former catchment of the western Yar. Other key sites are found at Newtown Creek, Werrar, and Wootton-Quarr.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Flint tools and monuments attest to neolithic occupation on the Isle of Wight. Unlike the previous Mesolithic hunter-gatherer population, Neolithic communities on the Isle of Wight were based on farming and linked to a migration of Neolithic populations from France and northwest Europe to Britain c. 6,000 years ago.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The Isle of Wight's most visible Neolithic site is the Longstone at Mottistone, the remains of an early Neolithic long barrow. Constructed initially with two standing stones at the entrance, only one remains today. A Neolithic mortuary enclosure has been identified on Tennyson Down near Freshwater.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Bronze Age Britain had large tin reserves in Cornwall and Devon areas, which was necessary to smelt bronze. At that time, the sea level was much lower, and carts of tin were brought across the Solent at low tide for export, possibly on the Ferriby Boats. Anthony Snodgrass suggests that a shortage of tin, as a part of the Bronze Age Collapse and trade disruptions in the Mediterranean around 1300 BC, forced metalworkers to seek an alternative to bronze. From the 7th century BC, during the Late Iron Age, the Isle of Wight, like the rest of Great Britain, was occupied by the Celtic Britons, in the form of the Durotriges tribe, as attested by finds of their coins, for example, the South Wight Hoard, and the Shalfleet Hoard. The island was known as Ynys Weith in Brittonic Celtic. Southeastern Britain experienced significant immigration, which is reflected in the current residents' genetic makeup. As the Iron Age began, tin value likely dropped sharply, greatly changing the Isle of Wight's economy. Trade, however, continued, as evidenced by the local abundance of European Iron Age coins.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Julius Caesar reported that the Belgae took the Isle of Wight in about 85 BC and recognised the culture of this general region as \"Belgic\" but made no reference to Vectis. The Roman historian Suetonius mentions that the island was captured by the commander Vespasian. The Romans built no towns on the island, but the remains of at least seven Roman villas have been found, indicating the prosperity of local agriculture. First-century exports were principally hides, enslaved people, hunting dogs, grain, cattle, silver, gold, and iron.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "There are indications that the island had vast trading links, with a port at Bouldnor, evidence of Bronze Age tin trading, and finds of Late Iron Age coins. Starting in AD 449, the 5th and 6th centuries saw groups of Germanic-speaking peoples from Northern Europe crossing the English Channel and gradually set about conquering the region.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "During the Early Middle Ages, the island was settled by Jutes as the pagan kingdom of the Wihtwara under King Arwald. In 685, it was invaded by King Cædwalla of Wessex, who tried to replace the inhabitants with his followers. Though in 686, Arwald was defeated, and the island became the last part of English lands to be converted to Christianity, Cædwalla was unsuccessful in driving the Jutes from the island. Wight was added to Wessex and became part of England under King Alfred the Great, including within the shire of Hampshire.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "It suffered especially from Viking raids and was often used as a winter base by Viking raiders when they could not reach Normandy. Later, both Earl Tostig and his brother Harold Godwinson (who became King Harold II) held manors on the island.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "The Norman Conquest of 1066 created the position of Lord of the Isle of Wight; the island was given by William the Conqueror to his kinsman William FitzOsbern. Carisbrooke Priory and the fort of Carisbrooke Castle were then founded. Allegiance was sworn to FitzOsbern rather than the king; the Lordship was subsequently granted to the de Redvers family by Henry I after his succession in 1100.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "For nearly 200 years the island was a semi-independent feudal fiefdom, with the de Redvers family ruling from Carisbrooke. The final private owner was the Countess Isabella de Fortibus, who, on her deathbed in 1293, was persuaded to sell it to Edward I. Subsequently, the island was under the control of the English Crown and its Lordship a royal appointment.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "The island continued to be attacked from the continent: it was raided in 1374 by the fleet of Castile and in 1377 by French raiders who burned several towns, including Newtown.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Under Henry VIII, who developed the Royal Navy and its Portsmouth base, the island was fortified at Yarmouth, Cowes, East Cowes, and Sandown.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "The French invasion on 21 July 1545 (famous for the sinking of the Mary Rose on the 19th) was repulsed by local militia.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "During the English Civil War, King Charles I fled to the Isle of Wight, believing he would receive sympathy from Governor Robert Hammond. Still, Hammond imprisoned the king in Carisbrooke Castle.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "During the Seven Years' War, the island was a staging post for British troops departing on expeditions against the French coast, such as the Raid on Rochefort. During 1759, with a planned French invasion imminent, a large force of soldiers was stationed there. The French called off their invasion following the Battle of Quiberon Bay.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "In the spring of 1817, the twenty-one year old John Keats spent time in Carisbrooke and Shanklin, where he found inspiration in the countryside and coast, and worked on his long poem Endymion.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "In the mid-1840s, potato blight was first found in the UK on the island, having arrived from Belgium. It was later transmitted to Ireland.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "In the 1860s, what remains in real terms the most expensive ever government spending project saw fortifications built on the island and in the Solent, as well as elsewhere along the south coast, including the Palmerston Forts, The Needles Batteries, and Fort Victoria, because of fears about possible French invasion.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "The future Queen Victoria spent childhood holidays on the island and became fond of it. When she became queen, she made Osborne House her winter home. Subsequently, the island became a fashionable holiday resort for many, including Alfred, Lord Tennyson, Julia Margaret Cameron, and Charles Dickens (who wrote much of David Copperfield there), as well as the French painter Berthe Morisot and members of European royalty.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "Until the queen's example, the island had been rural, with most people employed in farming, fishing, or boat-building. The boom in tourism, spurred by growing wealth and leisure time and by Victoria's presence, led to the significant urban development of the island's coastal resorts. As one report summarises, \"The Queen's regular presence on the island helped put the Isle of Wight 'on the map' as a Victorian holiday and wellness destination ... and her former residence Osborne House is now one of the most visited attractions on the island.\" While on the island, the queen used a bathing machine that could be wheeled into the water on Osborne Beach; inside the small wooden hut, she could undress and then bathe, without being visible to others. Her machine had a changing room and a WC with plumbing. The refurbished machine is now displayed at the beach.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "On 14 January 1878, Alexander Graham Bell demonstrated an early version of the telephone to the queen, placing calls to Cowes, Southampton, and London. These were the first publicly-witnessed long-distance telephone calls in the UK. The queen tried the device and considered the process to be \"quite extraordinary\" although the sound was \"rather faint\". She later asked to buy the equipment that was used, but Bell offered to make \"a set of telephones\" specifically for her.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "The world's first radio station was set up by Guglielmo Marconi in 1897, during her reign, at the Needles Battery, at the western tip of the island. A 168-foot (51 m) high mast was erected near the Royal Needles Hotel as part of an experiment on communicating with ships at sea. That location is now the site of the Marconi Monument. In 1898 the first paid wireless telegram (called a \"Marconigram\") was sent from this station, and the island was for some time the home of the National Wireless Museum near Ryde.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "Queen Victoria died at Osborne House on 22 January 1901 at 81.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "During the Second World War, the island was frequently bombed. With its proximity to German-occupied France, the island hosted observation stations, transmitters, and the RAF radar station at Ventnor. Adolf Hitler personally suggested an invasion of the Isle of Wight as a supplementary operation for Operation Sealion, and the possibility of an invasion was incorporated into Fuhrer Directive 16. Field Marshal Alan Brooke, in charge of defending the UK during 1940, was sceptical about being able to hold the island in the face of an invasion, instead considering that British forces would retreat to the western side of the island rather than commit forces against what might be a diversionary landing. In the end no invasion of the island was carried out as German naval commanders feared any invasion force might be cut off by British naval forces, particularly Royal Navy submarines.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "The island was the starting point for one of the earlier Operation Pluto pipelines to feed fuel to Europe after the Normandy landings.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "The Needles Battery was used to develop and test the Black Arrow and Black Knight space rockets, which were subsequently launched from Woomera, Australia.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "The Isle of Wight Festival was a large rock festival near Afton Down, West Wight, in August 1970, following two smaller concerts in 1968 and 1969. The 1970 show was one of the last public performances by Jimi Hendrix and attracted somewhere between 600,000 and 700,000 attendees. The festival was revived in 2002 in a different format and is now an annual event.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "On 26 October 2020, an oil tanker, the Nave Andromeda, suspected to have been hijacked by Nigerian stowaways, was stormed southeast of the island by the Special Boat Service. Seven people believed to be Nigerians seeking UK asylum were handed over to Hampshire Police.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "The island has a single Member of Parliament. The Isle of Wight constituency covers the entire island, with 138,300 permanent residents in 2011, being one of the most populated constituencies in the United Kingdom (more than 50% above the English average). In 2011 following passage of the Parliamentary Voting System and Constituencies Act, the Sixth Periodic Review of Westminster constituencies was to have changed this, but this was deferred to no earlier than October 2022 by the Electoral Registration and Administration Act 2013. Thus the single constituency remained for the 2015, 2017 and 2019 general elections. However, two separate East and West constituencies are proposed for the island under the 2022 review now under way.",
"title": "Governance"
},
{
"paragraph_id": 41,
"text": "The Isle of Wight is a ceremonial and non-metropolitan county. Since the abolition of its two borough councils and restructuring of the Isle of Wight County Council into the new Isle of Wight Council in 1995, it has been administered by a single tier Island Council which has the same powers as a unitary authority in England.",
"title": "Governance"
},
{
"paragraph_id": 42,
"text": "Elections in the constituency have traditionally been a battle between the Conservatives and the Liberal Democrats. Andrew Turner of the Conservative Party gained the seat from Peter Brand of the Lib Dems at the 2001 general election. Since 2009, Turner was embroiled in controversy over his expenses, health, and relationships with colleagues, with local Conservatives having tried but failed to remove him in the runup to the 2015 general election. He stood down prior to the 2017 snap general election, and the new Conservative Party candidate Bob Seely was elected with a majority of 21,069 votes.",
"title": "Governance"
},
{
"paragraph_id": 43,
"text": "At the Isle of Wight Council election of 2013, the Conservatives lost the majority which they had held since 2005 to the Island Independents, with Island Independent councillors holding 16 of the 40 seats, and a further five councillors sitting as independents outside the group. The Conservatives regained control, winning 10 more seats and taking their total to 25 at the 2017 local election, before losing 7 seats in 2021. A coalition entitled the Alliance Coalition was formed between independent, Green Party and Our Island councillors, with independent councillor Lora Peacey-Wilcox leading the council since May 2021.",
"title": "Governance"
},
{
"paragraph_id": 44,
"text": "There have been small regionalist movements: the Vectis National Party and the Isle of Wight Party; but they have attracted little support at elections.",
"title": "Governance"
},
{
"paragraph_id": 45,
"text": "The Isle of Wight is situated between the Solent and the English Channel, is roughly rhomboid in shape, and covers an area of 150 sq mi (380 km). Slightly more than half, mainly in the west, is designated as the Isle of Wight Area of Outstanding Natural Beauty. The island has 100 sq mi (258 km) of farmland, 20 sq mi (52 km) of developed areas, and 57 miles (92 km) of coastline. Its landscapes are diverse, leading to its oft-quoted description as \"England in miniature\". In June 2019 the whole island was designated a UNESCO Biosphere Reserve, recognising the sustainable relationships between its residents and the local environment.",
"title": "Geography"
},
{
"paragraph_id": 46,
"text": "West Wight is predominantly rural, with dramatic coastlines dominated by the chalk downland ridge, running across the whole island and ending in the Needles stacks. The southwestern quarter is commonly referred to as the Back of the Wight, and has a unique character. The highest point on the island is St Boniface Down in the south east, which at 241 m (791 ft) is a marilyn. The most notable habitats on the rest of the island are probably the soft cliffs and sea ledges, which are scenic features, important for wildlife, and internationally protected.",
"title": "Geography"
},
{
"paragraph_id": 47,
"text": "The island has three principal rivers. The River Medina flows north into the Solent, the Eastern Yar flows roughly northeast to Bembridge Harbour, and the Western Yar flows the short distance from Freshwater Bay to a relatively large estuary at Yarmouth. Without human intervention the sea might well have split the island into three: at the west end where a bank of pebbles separates Freshwater Bay from the marshy backwaters of the Western Yar east of Freshwater, and at the east end where a thin strip of land separates Sandown Bay from the marshy Eastern Yar basin.",
"title": "Geography"
},
{
"paragraph_id": 48,
"text": "The Undercliff between St Catherine's Point and Bonchurch is the largest area of landslip morphology in western Europe.",
"title": "Geography"
},
{
"paragraph_id": 49,
"text": "The north coast is unusual in having four high tides each day, with a double high tide every twelve and a half hours. This arises because the western Solent is narrower than the eastern; the initial tide of water flowing from the west starts to ebb before the stronger flow around the south of the island returns through the eastern Solent to create a second high water.",
"title": "Geography"
},
{
"paragraph_id": 50,
"text": "The Isle of Wight is made up of a variety of rock types dating from early Cretaceous (around 127 million years ago) to the middle of the Palaeogene (around 30 million years ago). The geological structure is dominated by a large monocline which causes a marked change in age of strata from the northern younger Tertiary beds to the older Cretaceous beds of the south. This gives rise to a dip of almost 90 degrees in the chalk beds, seen best at the Needles.",
"title": "Geography"
},
{
"paragraph_id": 51,
"text": "The northern half of the island is mainly composed of clays, with the southern half formed of the chalk of the central east–west downs, as well as Upper and Lower Greensands and Wealden strata. These strata continue west from the island across the Solent into Dorset, forming the basin of Poole Harbour (Tertiary) and the Isle of Purbeck (Cretaceous) respectively. The chalky ridges of Wight and Purbeck were a single formation before they were breached by waters from the River Frome during the last ice age, forming the Solent and turning Wight into an island. The Needles, along with Old Harry Rocks on Purbeck, represent the edges of this breach.",
"title": "Geography"
},
{
"paragraph_id": 52,
"text": "All the rocks found on the island are sedimentary, such as limestones, mudstones and sandstones. They are rich in fossils; many can be seen exposed on beaches as the cliffs erode. Lignitic coal is present in small quantities within seams, and can be seen on the cliffs and shore at Whitecliff Bay. Fossilised molluscs have been found there, and also on the northern coast along with fossilised crocodiles, turtles and mammal bones; the youngest date back to around 30 million years ago.",
"title": "Geography"
},
{
"paragraph_id": 53,
"text": "The island is one of the most important areas in Europe for dinosaur fossils. The eroding cliffs often reveal previously hidden remains, particularly along the Back of the Wight. Dinosaur bones and fossilised footprints can be seen in and on the rocks exposed around the island's beaches, especially at Yaverland and Compton Bay, from the strata of the Wessex Formation. As a result, the island has been nicknamed \"Dinosaur Island\" and Dinosaur Isle was established in 2001.",
"title": "Geography"
},
{
"paragraph_id": 54,
"text": "The area was affected by sea level changes during the repeated Quaternary glaciations. The island probably became separated from the mainland about 125,000 years ago, during the Ipswichian interglacial.",
"title": "Geography"
},
{
"paragraph_id": 55,
"text": "Like the rest of the UK, the island has an oceanic climate, but is somewhat milder and sunnier, which makes it a holiday destination. It also has a longer growing season. Lower Ventnor and the neighbouring Undercliff have a particular microclimate, because of their sheltered position south of the downs. The island enjoys 1,800–2,100 hours of sunshine a year. Some years have almost no snow in winter, and only a few days of hard frost. The island is in Hardiness zone 9.",
"title": "Geography"
},
{
"paragraph_id": 56,
"text": "The Isle of Wight is one of the few places in England where the red squirrel is still flourishing; no grey squirrels are to be found. There are occasional sightings of wild deer, and there is a colony of wild goats on Ventnor's downs. Protected species such as the dormouse and rare bats can be found. The island is home to a population of European hedgehogs, and a rescue organisation devoted to them, Save Our Hedgehogs Isle of Wight, was founded in 2019. The Glanville fritillary butterfly's distribution in the United Kingdom is largely restricted to the edges of the island's crumbling cliffs.",
"title": "Geography"
},
{
"paragraph_id": 57,
"text": "A competition in 2002 named the pyramidal orchid as the Isle of Wight's county flower.",
"title": "Geography"
},
{
"paragraph_id": 58,
"text": "The table below shows the regional gross value (in millions of pounds) added by the Isle of Wight economy, at current prices, compiled by the Office for National Statistics.",
"title": "Economy"
},
{
"paragraph_id": 59,
"text": "According to the 2011 census, the island's population of 138,625 lives in 61,085 households, giving an average household size of 2.27 people.",
"title": "Economy"
},
{
"paragraph_id": 60,
"text": "41% of households own their home outright and a further 29% own with a mortgage, so in total 70% of households are owned (compared to 68% for South East England).",
"title": "Economy"
},
{
"paragraph_id": 61,
"text": "Compared to South East England, the island has fewer children (19% aged 0–17 compared to 22% for the South East) and more elderly (24% aged 65+ compared to 16% for the South East), giving an average age of 44 years for an island resident compared to 40 in South East England.",
"title": "Economy"
},
{
"paragraph_id": 62,
"text": "The largest industry on the island is tourism, but it also has a significant agriculture including sheep, dairy farming and arable crops. Traditional agricultural commodities are more difficult to market off the island because of transport costs, but local farmers have succeeded in exploiting some specialist markets, with the higher price of such products absorbing the transport costs. One of the most successful agricultural sectors is now the growing of crops under cover, particularly salad crops including tomatoes and cucumbers. The island has a warmer climate and a longer growing season than much of the United Kingdom. Garlic has been grown in Newchurch for many years, and is, in part, exported to France. This has led to the establishment of an annual Garlic Festival at Newchurch, which is one of the largest events of the local calendar.",
"title": "Economy"
},
{
"paragraph_id": 63,
"text": "A favourable climate supports two vineyards, including one of the oldest in the British Isles at Adgestone. Lavender is grown for its oil. The largest agricultural sector has been dairying, but due to low milk prices and strict legislation for UK milk producers, the dairy industry has been in decline: there were nearly 150 producers in the mid-1980s, but now just 24.",
"title": "Economy"
},
{
"paragraph_id": 64,
"text": "Maritime industries, especially the making of sailcloth and boat building, have long been associated with the island, although this has diminished in recent years. GKN operates what began as the British Hovercraft Corporation, a subsidiary of (and known latterly as) Westland Aircraft, although they have reduced the extent of plant and workforce and sold the main site. Previously it had been the independent company Saunders-Roe, one of the island's most notable historic firms that produced many flying boats and the world's first hovercraft.",
"title": "Economy"
},
{
"paragraph_id": 65,
"text": "Another manufacturing activity is in composite materials, used by boat-builders and the wind turbine manufacturer Vestas, which has a wind turbine blade factory and testing facilities in West Medina Mills and East Cowes.",
"title": "Economy"
},
{
"paragraph_id": 66,
"text": "Bembridge Airfield is the home of Britten-Norman, manufacturers of the Islander and Trislander aircraft. This is shortly to become the site of the European assembly line for Cirrus light aircraft. The Norman Aeroplane Company is a smaller aircraft manufacturing company operating in Sandown. There have been three other firms that built planes on the island.",
"title": "Economy"
},
{
"paragraph_id": 67,
"text": "In 2005, Northern Petroleum began exploratory drilling for oil at its Sandhills-2 borehole at Porchfield, but ceased operations in October that year after failing to find significant reserves.",
"title": "Economy"
},
{
"paragraph_id": 68,
"text": "There are three breweries on the island. Goddards Brewery in Ryde opened in 1993. David Yates, who was head brewer of the Island Brewery, started brewing as Yates Brewery at the Inn at St Lawrence in 2000. Ventnor Brewery, which closed in 2009, was the last incarnation of Burt's Brewery, brewing since the 1840s in Ventnor. Until the 1960s most pubs were owned by Mews Brewery, situated in Newport near the old railway station, but it closed and the pubs were taken over by Strong's, and then by Whitbread. By some accounts Mews beer was apt to be rather cloudy and dark. In the 19th century they pioneered the use of screw top cans for export to British India.",
"title": "Economy"
},
{
"paragraph_id": 69,
"text": "The island's heritage is a major asset that has for many years supported its tourist economy. Holidays focused on natural heritage, including wildlife and geology, are becoming an alternative to the traditional British seaside holiday, which went into decline in the second half of the 20th century due to the increased affordability of foreign holidays. The island is still an important destination for coach tours from other parts of the United Kingdom.",
"title": "Economy"
},
{
"paragraph_id": 70,
"text": "Tourism is still the largest industry, and most island towns and villages offer hotels, hostels and camping sites. In 1999, it hosted 2.7 million visitors, with 1.5 million staying overnight, and 1.2 million day visits; only 150,000 of these were from abroad. Between 1993 and 2000, visits increased at an average rate of 3% per year.",
"title": "Economy"
},
{
"paragraph_id": 71,
"text": "At the turn of the 19th century the island had ten pleasure piers, including two at Ryde and a \"chain pier\" at Seaview. The Victoria Pier in Cowes succeeded the earlier Royal Pier but was itself removed in 1960. The piers at Ryde, Seaview, Sandown, Shanklin and Ventnor originally served a coastal steamer service that operated from Southsea on the mainland. The piers at Seaview, Shanklin, Ventnor and Alum Bay were all destroyed by various storms during the 20th century; only the railway pier at Ryde and the piers at Sandown, Totland Bay (currently closed to the public) and Yarmouth survive.",
"title": "Economy"
},
{
"paragraph_id": 72,
"text": "Blackgang Chine is the oldest theme park in Britain, opened in 1843. The skeleton of a dead whale that its founder Alexander Dabell found in 1844 is still on display.",
"title": "Economy"
},
{
"paragraph_id": 73,
"text": "As well as its more traditional attractions, the island is often host to walking or cycling holidays through the attractive scenery. An annual walking festival has attracted considerable interest. The 70 miles (113 km) Isle of Wight Coastal Path follows the coastline as far as possible, deviating onto roads where the route along the coast is impassable.",
"title": "Economy"
},
{
"paragraph_id": 74,
"text": "The tourist board for the island is Visit Isle of Wight, a non-profit company. It is the Destination Management Organisation for the Isle of Wight, a public and private sector partnership led by the private sector, and consists of over 1,200 companies, including the ferry operators, the local bus company, rail operator and tourism providers working together to collectively promote the island. Its income is derived from the Wight BID, a business improvement district levy fund.",
"title": "Economy"
},
{
"paragraph_id": 75,
"text": "A major contributor to the local economy is sailing and marine-related tourism.",
"title": "Economy"
},
{
"paragraph_id": 76,
"text": "Summer Camp at Camp Beaumont is an attraction at the old Bembridge School site.",
"title": "Economy"
},
{
"paragraph_id": 77,
"text": "The main local newspaper used to be the Isle of Wight County Press, but its circulation has declined over the years, especially since it was taken over by Newsquest in July 2017. In 2018 a new free newspaper was launched, the Isle of Wight Observer. By 2023 the newcomer was distributing 18,500 copies compared to the Isle of Wight County Press's total circulation of 11,575. The Island's leading news website, Island Echo, was launched in May 2012 and now publishes in excess of 5,000 news articles a year. Other online news sources for the Isle of Wight include On the Wight.",
"title": "Economy"
},
{
"paragraph_id": 78,
"text": "The island has a local commercial radio station and a community radio station: commercial station Isle of Wight Radio has broadcast in the medium-wave band since 1990 and on 107.0 MHz (with three smaller transmitters on 102.0 MHz) FM since 1998, as well as streaming on the Internet. Community station Vectis Radio has broadcast online since 2010, and in 2017 started broadcasting on FM 104.6. The station operates from the Riverside Centre in Newport. The island is also covered by a number of local stations on the mainland, including the BBC station BBC Radio Solent broadcast from Southampton. The island's not-for-profit community radio station Angel Radio opened in 2007. Angel Radio began broadcasting on 91.5 MHz from studios in Cowes and a transmitter near Newport.",
"title": "Economy"
},
{
"paragraph_id": 79,
"text": "Important broadcasting infrastructure includes Chillerton Down transmitting station with a mast that is the tallest structure on the island, and Rowridge transmitting station, which broadcasts the main television signal both locally and for most of Hampshire and parts of Dorset and West Sussex.",
"title": "Economy"
},
{
"paragraph_id": 80,
"text": "The local accent is similar to the traditional dialect of Hampshire, featuring the dropping of some consonants and an emphasis on longer vowels. It is similar to the West Country dialects heard in South West England, but less pronounced.",
"title": "Culture"
},
{
"paragraph_id": 81,
"text": "The island has its own local and regional words. Some, such as nipper/nips (a young male person), are still sometimes used and shared with neighbouring areas of the mainland. A few are unique to the island, for example overner and caulkhead (see below). Others are more obscure and now used mainly for comic emphasis, such as mallishag (meaning \"caterpillar\"), gurt meaning \"large\", nammit (a mid-morning snack) and gallybagger (\"scarecrow\", and now the name of a local cheese).",
"title": "Culture"
},
{
"paragraph_id": 82,
"text": "There remains occasional confusion between the Isle of Wight as a county and its former position within Hampshire. The island was regarded and administered as a part of Hampshire until 1890, when its distinct identity was recognised with the formation of Isle of Wight County Council (see also Politics of the Isle of Wight). However, it remained a part of Hampshire until the local government reforms of 1974, when it became a full ceremonial county with its own Lord Lieutenant.",
"title": "Culture"
},
{
"paragraph_id": 83,
"text": "In January 2009, the first general flag for the county was accepted by the Flag Institute.",
"title": "Culture"
},
{
"paragraph_id": 84,
"text": "Island residents are sometimes referred to as \"Vectensians\", \"Vectians\" or, if born on the island, \"caulkheads\". One theory is that this last comes from the once prevalent local industry of caulking or sealing wooden boats; the term became attached to islanders either because they were so employed, or as a derisory term for perceived unintelligent labourers from elsewhere. The term \"overner\" is used for island residents originating from the mainland (an abbreviated form of \"overlander\", which is an archaic term for \"outsider\" still found in parts of Australia).",
"title": "Culture"
},
{
"paragraph_id": 85,
"text": "Residents refer to the island as \"The Island\", as did Jane Austen in Mansfield Park, and sometimes to the UK mainland as \"North Island\".",
"title": "Culture"
},
{
"paragraph_id": 86,
"text": "To promote the island's identity and culture, the High Sheriff, Robin Courage, founded an Isle of Wight Day; the first was held on Saturday 24 September 2016.",
"title": "Culture"
},
{
"paragraph_id": 87,
"text": "Sport plays a key part of culture on the Isle of Wight. Sports include golf, marathon, cycling and sailing.",
"title": "Culture"
},
{
"paragraph_id": 88,
"text": "The island is home to the Isle of Wight Festival and until 2016, Bestival, before it was relocated to Lulworth Estate in Dorset. In 1970, the festival was headlined by Jimi Hendrix attracting an audience of 600,000, some six times the local population at the time. It is the home of the bands The Bees, Trixie's Big Red Motorbike, Level 42, and Wet Leg.",
"title": "Culture"
},
{
"paragraph_id": 89,
"text": "The Isle of Wight has 489 miles (787 km) of roadway. It does not have a motorway, although there is a short stretch of dual carriageway towards the north of Newport near the hospital and prison.",
"title": "Transport"
},
{
"paragraph_id": 90,
"text": "A comprehensive bus network operated by Southern Vectis links most settlements, with Newport as its central hub.",
"title": "Transport"
},
{
"paragraph_id": 91,
"text": "Journeys away from the island involve a ferry journey. Car ferry and passenger catamaran services are run by Wightlink and Red Funnel, and a hovercraft passenger service (the only such remaining in the world) by Hovertravel.",
"title": "Transport"
},
{
"paragraph_id": 92,
"text": "The island formerly had its own railway network of over 55 miles (89 km), but only one line remains in regular use. The Island Line is part of the United Kingdom's National Rail network, running a little under 9 miles (14 km) from Shanklin to Ryde Pier Head, where there is a connecting ferry service to Portsmouth Harbour station on the mainland network. The line was opened by the Isle of Wight Railway in 1864, and from 1996 to 2007 was run by the smallest train operating company on the network, Island Line Trains. It is notable for utilising old ex-London Underground rolling stock, due to the small size of its tunnels and unmodernised signalling. Branching off the Island Line at Smallbrook Junction is the heritage Isle of Wight Steam Railway, which runs for 5+1⁄2 miles (8.9 km) to the outskirts of Wootton on the former line to Newport.",
"title": "Transport"
},
{
"paragraph_id": 93,
"text": "There are two airfields for general aviation, Isle of Wight Airport at Sandown and Bembridge Airport.",
"title": "Transport"
},
{
"paragraph_id": 94,
"text": "The island has over 200 miles (322 km) of cycleways, many of which can be enjoyed off-road. The principal trails are:",
"title": "Transport"
},
{
"paragraph_id": 95,
"text": "The Isle of Wight is near the densely populated south of England, yet separated from the mainland. This position led to it hosting three prisons: Albany, Camp Hill and Parkhurst, all located outside Newport near the main road to Cowes. Albany and Parkhurst were among the few Category A prisons in the UK until they were downgraded in the 1990s. The downgrading of Parkhurst was precipitated by a major escape: three prisoners (two murderers and a blackmailer) escaped from the prison on 3 January 1995 for four days, before being recaptured. Parkhurst enjoyed notoriety as one of the toughest jails in the United Kingdom, and housed many notable inmates including the Yorkshire Ripper Peter Sutcliffe, New Zealand drug lord Terry Clark and the Kray twins.",
"title": "Prisons"
},
{
"paragraph_id": 96,
"text": "Camp Hill is located adjacent but to the west of Albany and Parkhurst, on the very edge of Parkhurst Forest, having been converted first to a borstal and later to a Category C prison. It was built on the site of an army camp (both Albany and Parkhurst were barracks); there is a small estate of tree-lined roads with the former officers' quarters (now privately owned) to the south and east. Camp Hill closed as a prison in March 2013.",
"title": "Prisons"
},
{
"paragraph_id": 97,
"text": "The management of all three prisons was merged into a single administration, under HMP Isle of Wight in April 2009.",
"title": "Prisons"
},
{
"paragraph_id": 98,
"text": "There are 69 local education authority-maintained schools on the Isle of Wight, and two independent schools. As a rural community, many of these are small and with fewer pupils than in urban areas. The Isle of Wight College is located on the outskirts of Newport.",
"title": "Education"
},
{
"paragraph_id": 99,
"text": "From September 2010, there was a transition period from the three-tier system of primary, middle and high schools to the two-tier system that is usual in England. Some schools have now closed, such as Chale C.E. Primary. Others have become \"federated\", such as Brading C.E. Primary and St Helen's Primary. Christ the King College started as two \"middle schools\", Trinity Middle School and Archbishop King Catholic Middle School, but has now been converted into a dual-faith secondary school and sixth form.",
"title": "Education"
},
{
"paragraph_id": 100,
"text": "Since September 2011 five new secondary schools, with an age range of 11 to 18 years, replaced the island's high schools (as a part of the previous three-tier system).",
"title": "Education"
},
{
"paragraph_id": 101,
"text": "Notable residents have included:",
"title": "Notable people"
},
{
"paragraph_id": 102,
"text": "The Isle of Wight has given names to many parts of former colonies, most notably Isle of Wight County in Virginia founded by settlers from the island in the 17th century. Its county seat is a town named Isle of Wight.",
"title": "Overseas names"
},
{
"paragraph_id": 103,
"text": "Other notable examples include:",
"title": "Overseas names"
},
{
"paragraph_id": 104,
"text": "",
"title": "External links"
}
]
| The Isle of Wight is an island, English county and unitary authority in the English Channel, 2 to 5 miles off the coast of Hampshire, across the Solent. It is the largest and second-most populous island in England. Referred to as "The Island" by residents, the Isle of Wight has resorts that have been popular holiday destinations since Victorian times. It is known for its mild climate, coastal scenery, and verdant landscape of fields, downland, and chines. The island is historically part of Hampshire. The island is designated a UNESCO Biosphere Reserve. The island has been home to the poets Algernon Charles Swinburne and Alfred, Lord Tennyson. Queen Victoria built her summer residence and final home, Osborne House at East Cowes on the Isle. It has a maritime and industrial tradition of boat-building, sail-making, the manufacture of flying boats, hovercraft, and Britain's space rockets. The island hosts annual music festivals, including the Isle of Wight Festival, which in 1970 was the largest rock music event ever held. It has well-conserved wildlife and some of Europe's richest cliffs and quarries of dinosaur fossils. The island has played an essential part in the defence of the ports of Southampton and Portsmouth and has been near the front line of conflicts through the ages, having faced the Spanish Armada and weathered the Battle of Britain. Being rural for most of its history, its Victorian fashionability and the growing affordability of holidays led to significant urban development during the late 19th and early 20th centuries. The island became a separate administrative county in 1890, independent of Hampshire. It continued to share the Lord Lieutenant of Hampshire until 1974, when it was made a ceremonial county in its own right. The island no longer has administrative links to Hampshire. However, the two counties share their police force and fire and rescue service, and the island's Anglican churches belong to the Diocese of Portsmouth. A combined local authority with Portsmouth and Southampton was considered as part of a regional devolution package but was subsequently rejected by the UK government in 2018. The quickest public transport link to the mainland is the hovercraft (Hovertravel) from Ryde to Southsea. Three vehicle ferries and two catamaran services cross the Solent to Southampton, Lymington, and Portsmouth via the island's largest ferry operator, Wightlink, and the island's second-largest ferry company, Red Funnel. Tourism is the largest industry on the island. | 2001-10-08T20:00:29Z | 2023-12-29T16:07:56Z | [
"Template:Div col",
"Template:Cite news",
"Template:Cite journal",
"Template:Wiktionary",
"Template:SE England",
"Template:Unitary authorities of England",
"Template:Other places",
"Template:Advert",
"Template:Notelist",
"Template:Cite web",
"Template:Webarchive",
"Template:Cite EB1911",
"Template:Wikivoyage",
"Template:Isle of Wight",
"Template:Weather box",
"Template:Convert",
"Template:EngPlacesKey",
"Template:Commons",
"Template:Curlie",
"Template:Short description",
"Template:Efn",
"Template:Circa",
"Template:Efn-lr",
"Template:When",
"Template:Reflist",
"Template:England counties",
"Template:Use dmy dates",
"Template:Redirect",
"Template:Cbignore",
"Template:ISBN missing",
"Template:Rws",
"Template:Cite book",
"Template:See also",
"Template:Main",
"Template:Clear right",
"Template:Authority control",
"Template:Infobox English county",
"Template:Notelist-lr",
"Template:Columns-list",
"Template:Div col end",
"Template:Portal",
"Template:Cite legislation UK",
"Template:External media",
"Template:Use British English"
]
| https://en.wikipedia.org/wiki/Isle_of_Wight |
15,107 | Internet Control Message Protocol | The Internet Control Message Protocol (ICMP) is a supporting protocol in the Internet protocol suite. It is used by network devices, including routers, to send error messages and operational information indicating success or failure when communicating with another IP address, for example, an error is indicated when a requested service is not available or that a host or router could not be reached. ICMP differs from transport protocols such as TCP and UDP in that it is not typically used to exchange data between systems, nor is it regularly employed by end-user network applications (with the exception of some diagnostic tools like ping and traceroute).
ICMP for IPv4 is defined in RFC 792. A separate ICMPv6, defined by RFC 4443, is used with IPv6.
ICMP is part of the Internet protocol suite as defined in RFC 792. ICMP messages are typically used for diagnostic or control purposes or generated in response to errors in IP operations (as specified in RFC 1122). ICMP errors are directed to the source IP address of the originating packet.
For example, every device (such as an intermediate router) forwarding an IP datagram first decrements the time to live (TTL) field in the IP header by one. If the resulting TTL is 0, the packet is discarded and an ICMP time exceeded in transit message is sent to the datagram's source address.
Many commonly used network utilities are based on ICMP messages. The traceroute command can be implemented by transmitting IP datagrams with specially set IP TTL header fields, and looking for ICMP time exceeded in transit and Destination unreachable messages generated in response. The related ping utility is implemented using the ICMP echo request and echo reply messages.
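As an illustration of that technique, the sketch below (added for illustration; the hostname, port number and hop limit are arbitrary) sends UDP probes with increasing IP TTL values and reports the source address of whatever ICMP message each probe provokes: time exceeded in transit from intermediate routers, or a destination unreachable from the target itself. This is the classic UDP-probe variant of traceroute; it needs raw-socket privileges and omits the retries and reply classification a real implementation performs.

```python
import socket

def simple_traceroute(dest, max_hops=30, timeout=2.0, port=33434):
    """Probe each hop by raising the IP TTL one step at a time and reading
    the ICMP message that comes back (requires root / CAP_NET_RAW)."""
    dest_ip = socket.gethostbyname(dest)
    for ttl in range(1, max_hops + 1):
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        recv.settimeout(timeout)
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        try:
            send.sendto(b"", (dest_ip, port))      # small datagram to an unlikely port
            try:
                _, addr = recv.recvfrom(512)       # sender of the ICMP reply
                hop = addr[0]
            except socket.timeout:
                hop = "*"
        finally:
            send.close()
            recv.close()
        print(f"{ttl:2d}  {hop}")
        if hop == dest_ip:                          # destination reached
            break

# simple_traceroute("example.org")   # uncomment and run with sufficient privileges
```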
ICMP uses the basic support of IP as if it were a higher-level protocol, however, ICMP is actually an integral part of IP. Although ICMP messages are contained within standard IP packets, ICMP messages are usually processed as a special case, distinguished from normal IP processing. In many cases, it is necessary to inspect the contents of the ICMP message and deliver the appropriate error message to the application responsible for transmitting the IP packet that prompted the ICMP message to be sent.
ICMP is a network-layer protocol, which makes it a layer 3 protocol in the seven-layer OSI model. In the four-layer TCP/IP model of the internet standard (RFC 1122), ICMP is an internet-layer protocol, which makes it a layer 2 protocol there; in the modern five-layer TCP/IP definitions (by Kozierok, Comer, Tanenbaum, Forouzan, Kurose, and Stallings) it is a layer 3 protocol.
There is no TCP or UDP port number associated with ICMP packets as these numbers are associated with the transport layer above.
The ICMP packet is encapsulated in an IPv4 packet. The packet consists of header and data sections.
The ICMP header starts after the IPv4 header and is identified by IP protocol number '1'. All ICMP packets have an 8-byte header and variable-sized data section. The first 4 bytes of the header have fixed format, while the last 4 bytes depend on the type/code of that ICMP packet.
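As a concrete sketch of that layout (an illustration, not part of the protocol specification text), the Python fragment below packs the 8-byte header of an echo request, whose type/code-dependent last 4 bytes carry the identifier and sequence number used by ping, and fills in the standard Internet checksum computed over the whole ICMP message.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 checksum: one's-complement sum of 16-bit words, complemented."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)       # fold the carries back in
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(identifier: int, sequence: int, payload: bytes = b"ping") -> bytes:
    # First 4 bytes (fixed format): type = 8 (echo request), code = 0, checksum = 0 for now.
    # Last 4 header bytes (type/code dependent): identifier and sequence number.
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)
    checksum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, identifier, sequence) + payload

packet = build_echo_request(identifier=0x1234, sequence=1)
assert len(packet) == 8 + len(b"ping")             # 8-byte header plus data section
```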
ICMP error messages contain a data section that includes a copy of the entire IPv4 header, plus at least the first eight bytes of data from the IPv4 packet that caused the error message. The length of ICMP error messages should not exceed 576 bytes. This data is used by the host to match the message to the appropriate process. If a higher level protocol uses port numbers, they are assumed to be in the first eight bytes of the original datagram's data.
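The sketch below shows how that quoted data can be used on the receiving side. Given the payload of an ICMP error message (the bytes after its own 8-byte header), it reads the embedded IPv4 header to recover the protocol and addresses of the failed datagram, then takes the two port numbers from the first eight bytes of that datagram's data, on the assumption that it was TCP or UDP. Offsets follow the IPv4, TCP and UDP header formats; validation is omitted.

```python
import struct

def parse_icmp_error_payload(payload: bytes):
    """payload = original IPv4 header plus at least 8 bytes of its data,
    as quoted inside an ICMP error message."""
    ihl = (payload[0] & 0x0F) * 4                  # inner IPv4 header length in bytes
    protocol = payload[9]                          # 1 = ICMP, 6 = TCP, 17 = UDP
    src_ip = ".".join(str(b) for b in payload[12:16])
    dst_ip = ".".join(str(b) for b in payload[16:20])
    first8 = payload[ihl:ihl + 8]                  # first 8 bytes of the original data
    src_port = dst_port = None
    if protocol in (6, 17):                        # TCP and UDP both begin with the two ports
        src_port, dst_port = struct.unpack("!HH", first8[:4])
    return protocol, src_ip, dst_ip, src_port, dst_port
```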
The variable size of the ICMP packet data section has been exploited. In the "Ping of death", large or fragmented ICMP packets are used for denial-of-service attacks. ICMP data can also be used to create covert channels for communication. These channels are known as ICMP tunnels.
Control messages are identified by the value in the type field. The code field gives additional context information for the message. Some control messages have been deprecated since the protocol was first introduced.
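For orientation, a few of the commonly encountered type values can be listed in code. The mapping below is only a small sample of the IANA registry, which assigns many more types, several of them deprecated.

```python
# A small sample of ICMPv4 message types (the IANA registry is authoritative).
ICMP_TYPES = {
    0: "Echo Reply",
    3: "Destination Unreachable",
    4: "Source Quench (deprecated)",
    5: "Redirect",
    8: "Echo Request",
    11: "Time Exceeded",
    13: "Timestamp",
    14: "Timestamp Reply",
    17: "Address Mask Request",
    18: "Address Mask Reply",
}

def describe(icmp_type: int, icmp_code: int) -> str:
    name = ICMP_TYPES.get(icmp_type, "Unknown/other")
    return f"type {icmp_type} ({name}), code {icmp_code}"

print(describe(11, 0))   # type 11 (Time Exceeded), code 0 = time to live exceeded in transit
```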
Source Quench requests that the sender decrease the rate of messages sent to a router or host. This message may be generated if a router or host does not have sufficient buffer space to process the request, or may occur if the router or host buffer is approaching its limit.
Data is sent at a very high speed from a host or from several hosts at the same time to a particular router on a network. Although a router has buffering capabilities, the buffering is limited to within a specified range. The router cannot queue any more data than the capacity of the limited buffering space. Thus if the queue gets filled up, incoming data is discarded until the queue is no longer full. But as no acknowledgement mechanism is present in the network layer, the client does not know whether the data has reached the destination successfully. Hence some remedial measures should be taken by the network layer to avoid these kind of situations. These measures are referred to as source quench.
In a source quench mechanism, the router sees that the incoming data rate is much faster than the outgoing data rate, and sends an ICMP message to the clients, informing them that they should slow down their data transfer speeds or wait for a certain amount of time before attempting to send more data. When a client receives this message, it automatically slows down the outgoing data rate or waits for a sufficient amount of time, which enables the router to empty the queue. Thus the source quench ICMP message acts as flow control in the network layer.
Since research suggested that "ICMP Source Quench [was] an ineffective (and unfair) antidote for congestion", routers' creation of source quench messages was deprecated in 1995 by RFC 1812. Furthermore, forwarding of and any kind of reaction to (flow control actions) source quench messages was deprecated from 2012 by RFC 6633.
Where:
Redirect requests data packets be sent on an alternative route. ICMP Redirect is a mechanism for routers to convey routing information to hosts. The message informs a host to update its routing information (to send packets on an alternative route). If a host tries to send data through a router (R1) and R1 sends the data on another router (R2) and a direct path from the host to R2 is available (that is, the host and R2 are on the same subnetwork), then R1 will send a redirect message to inform the host that the best route for the destination is via R2. The host should then change its route information and send packets for that destination directly to R2. The router will still send the original datagram to the intended destination. However, if the datagram contains routing information, this message will not be sent even if a better route is available. RFC 1122 states that redirects should only be sent by gateways and should not be sent by Internet hosts.
Where:
Time Exceeded is generated by a gateway to inform the source of a discarded datagram due to the time to live field reaching zero. A time exceeded message may also be sent by a host if it fails to reassemble a fragmented datagram within its time limit.
Time exceeded messages are used by the traceroute utility to identify gateways on the path between two hosts.
Where:
Timestamp is used for time synchronization. The originating timestamp is set to the time (in milliseconds since midnight) the sender last touched the packet. The receive and transmit timestamps are not used.
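A sketch of how a sender might fill the originate field follows. It assumes the system clock counts Unix time, whose day boundary is midnight UT, and it ignores leap seconds; the receive and transmit fields are sent as zero, as noted above.

```python
import struct
import time

def ms_since_midnight_ut() -> int:
    """Originate timestamp: milliseconds since midnight UT, as a 32-bit value."""
    now = time.time()                        # seconds since the Unix epoch (midnight UT)
    return int((now % 86400) * 1000) & 0xFFFFFFFF

def timestamp_request_body() -> bytes:
    # Three 32-bit fields: originate, receive, transmit (the last two sent as zero).
    return struct.pack("!III", ms_since_midnight_ut(), 0, 0)

# The body follows an 8-byte ICMP header with type 13 (Timestamp) and code 0;
# the reply carries type 14 with the receive and transmit fields filled in.
```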
Where:
Timestamp Reply replies to a Timestamp message. It consists of the originating timestamp sent by the sender of the Timestamp as well as a receive timestamp indicating when the Timestamp was received and a transmit timestamp indicating when the Timestamp reply was sent.
Where:
The use of Timestamp and Timestamp Reply messages to synchronize the clocks of Internet nodes has largely been replaced by the UDP-based Network Time Protocol and the Precision Time Protocol.
Address mask request is normally sent by a host to a router in order to obtain an appropriate subnet mask.
Recipients should reply to this message with an Address mask reply message.
Where:
ICMP Address Mask Request may be used as part of a reconnaissance attack to gather information on the target network; therefore, ICMP Address Mask Reply is disabled by default on Cisco IOS.
Address mask reply is used to reply to an address mask request message with an appropriate subnet mask.
Where:
Destination unreachable is generated by the host or its inbound gateway to inform the client that the destination is unreachable for some reason. Reasons for this message may include: the physical connection to the host does not exist (distance is infinite); the indicated protocol or port is not active; the data must be fragmented but the 'don't fragment' flag is on. Unreachable TCP ports notably respond with TCP RST rather than a destination unreachable type 3 as might be expected. Destination unreachable is never reported for IP multicast transmissions.
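The code field distinguishes the reasons listed above. The values below are a sample of the standard codes for type 3, shown here only as an illustration; the full registry defines further codes.

```python
# A sample of code values for ICMP type 3 (Destination Unreachable).
DEST_UNREACHABLE_CODES = {
    0: "Network unreachable",
    1: "Host unreachable",
    2: "Protocol unreachable",
    3: "Port unreachable",
    4: "Fragmentation needed and DF set",
    5: "Source route failed",
}

def unreachable_reason(code: int) -> str:
    return DEST_UNREACHABLE_CODES.get(code, f"Other/unassigned (code {code})")

print(unreachable_reason(4))   # the case exploited by Path MTU discovery
```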
Where: | [
{
"paragraph_id": 0,
"text": "The Internet Control Message Protocol (ICMP) is a supporting protocol in the Internet protocol suite. It is used by network devices, including routers, to send error messages and operational information indicating success or failure when communicating with another IP address, for example, an error is indicated when a requested service is not available or that a host or router could not be reached. ICMP differs from transport protocols such as TCP and UDP in that it is not typically used to exchange data between systems, nor is it regularly employed by end-user network applications (with the exception of some diagnostic tools like ping and traceroute).",
"title": ""
},
{
"paragraph_id": 1,
"text": "ICMP for IPv4 is defined in RFC 792. A separate ICMPv6, defined by RFC 4443, is used with IPv6.",
"title": ""
},
{
"paragraph_id": 2,
"text": "ICMP is part of the Internet protocol suite as defined in RFC 792. ICMP messages are typically used for diagnostic or control purposes or generated in response to errors in IP operations (as specified in RFC 1122). ICMP errors are directed to the source IP address of the originating packet.",
"title": "Technical details"
},
{
"paragraph_id": 3,
"text": "For example, every device (such as an intermediate router) forwarding an IP datagram first decrements the time to live (TTL) field in the IP header by one. If the resulting TTL is 0, the packet is discarded and an ICMP time exceeded in transit message is sent to the datagram's source address.",
"title": "Technical details"
},
{
"paragraph_id": 4,
"text": "Many commonly used network utilities are based on ICMP messages. The traceroute command can be implemented by transmitting IP datagrams with specially set IP TTL header fields, and looking for ICMP time exceeded in transit and Destination unreachable messages generated in response. The related ping utility is implemented using the ICMP echo request and echo reply messages.",
"title": "Technical details"
},
{
"paragraph_id": 5,
"text": "ICMP uses the basic support of IP as if it were a higher-level protocol, however, ICMP is actually an integral part of IP. Although ICMP messages are contained within standard IP packets, ICMP messages are usually processed as a special case, distinguished from normal IP processing. In many cases, it is necessary to inspect the contents of the ICMP message and deliver the appropriate error message to the application responsible for transmitting the IP packet that prompted the ICMP message to be sent.",
"title": "Technical details"
},
{
"paragraph_id": 6,
"text": "ICMP is a network-layer protocol, this makes it layer 3 protocol by the 7 layer OSI model. Based on the 4 layer TCP/IP model, ICMP is an internet-layer protocol, which makes it layer 2 protocol (internet standard RFC 1122 TCP/IP model with 4 layers) or layer 3 protocol based on modern 5 layer TCP/IP protocol definitions (by Kozierok, Comer, Tanenbaum, Forouzan, Kurose, Stallings).",
"title": "Technical details"
},
{
"paragraph_id": 7,
"text": "There is no TCP or UDP port number associated with ICMP packets as these numbers are associated with the transport layer above.",
"title": "Technical details"
},
{
"paragraph_id": 8,
"text": "The ICMP packet is encapsulated in an IPv4 packet. The packet consists of header and data sections.",
"title": "Datagram structure"
},
{
"paragraph_id": 9,
"text": "The ICMP header starts after the IPv4 header and is identified by IP protocol number '1'. All ICMP packets have an 8-byte header and variable-sized data section. The first 4 bytes of the header have fixed format, while the last 4 bytes depend on the type/code of that ICMP packet.",
"title": "Datagram structure"
},
{
"paragraph_id": 10,
"text": "ICMP error messages contain a data section that includes a copy of the entire IPv4 header, plus at least the first eight bytes of data from the IPv4 packet that caused the error message. The length of ICMP error messages should not exceed 576 bytes. This data is used by the host to match the message to the appropriate process. If a higher level protocol uses port numbers, they are assumed to be in the first eight bytes of the original datagram's data.",
"title": "Datagram structure"
},
{
"paragraph_id": 11,
"text": "The variable size of the ICMP packet data section has been exploited. In the \"Ping of death\", large or fragmented ICMP packets are used for denial-of-service attacks. ICMP data can also be used to create covert channels for communication. These channels are known as ICMP tunnels.",
"title": "Datagram structure"
},
{
"paragraph_id": 12,
"text": "Control messages are identified by the value in the type field. The code field gives additional context information for the message. Some control messages have been deprecated since the protocol was first introduced.",
"title": "Control messages"
},
{
"paragraph_id": 13,
"text": "Source Quench requests that the sender decrease the rate of messages sent to a router or host. This message may be generated if a router or host does not have sufficient buffer space to process the request, or may occur if the router or host buffer is approaching its limit.",
"title": "Control messages"
},
{
"paragraph_id": 14,
"text": "Data is sent at a very high speed from a host or from several hosts at the same time to a particular router on a network. Although a router has buffering capabilities, the buffering is limited to within a specified range. The router cannot queue any more data than the capacity of the limited buffering space. Thus if the queue gets filled up, incoming data is discarded until the queue is no longer full. But as no acknowledgement mechanism is present in the network layer, the client does not know whether the data has reached the destination successfully. Hence some remedial measures should be taken by the network layer to avoid these kind of situations. These measures are referred to as source quench.",
"title": "Control messages"
},
{
"paragraph_id": 15,
"text": "In a source quench mechanism, the router sees that the incoming data rate is much faster than the outgoing data rate, and sends an ICMP message to the clients, informing them that they should slow down their data transfer speeds or wait for a certain amount of time before attempting to send more data. When a client receives this message, it automatically slows down the outgoing data rate or waits for a sufficient amount of time, which enables the router to empty the queue. Thus the source quench ICMP message acts as flow control in the network layer.",
"title": "Control messages"
},
{
"paragraph_id": 16,
"text": "Since research suggested that \"ICMP Source Quench [was] an ineffective (and unfair) antidote for congestion\", routers' creation of source quench messages was deprecated in 1995 by RFC 1812. Furthermore, forwarding of and any kind of reaction to (flow control actions) source quench messages was deprecated from 2012 by RFC 6633.",
"title": "Control messages"
},
{
"paragraph_id": 17,
"text": "Where:",
"title": "Control messages"
},
{
"paragraph_id": 18,
"text": "Redirect requests data packets be sent on an alternative route. ICMP Redirect is a mechanism for routers to convey routing information to hosts. The message informs a host to update its routing information (to send packets on an alternative route). If a host tries to send data through a router (R1) and R1 sends the data on another router (R2) and a direct path from the host to R2 is available (that is, the host and R2 are on the same subnetwork), then R1 will send a redirect message to inform the host that the best route for the destination is via R2. The host should then change its route information and send packets for that destination directly to R2. The router will still send the original datagram to the intended destination. However, if the datagram contains routing information, this message will not be sent even if a better route is available. RFC 1122 states that redirects should only be sent by gateways and should not be sent by Internet hosts.",
"title": "Control messages"
},
{
"paragraph_id": 19,
"text": "Where:",
"title": "Control messages"
},
{
"paragraph_id": 20,
"text": "Time Exceeded is generated by a gateway to inform the source of a discarded datagram due to the time to live field reaching zero. A time exceeded message may also be sent by a host if it fails to reassemble a fragmented datagram within its time limit.",
"title": "Control messages"
},
{
"paragraph_id": 21,
"text": "Time exceeded messages are used by the traceroute utility to identify gateways on the path between two hosts.",
"title": "Control messages"
},
{
"paragraph_id": 22,
"text": "Where:",
"title": "Control messages"
},
{
"paragraph_id": 23,
"text": "Timestamp is used for time synchronization. The originating timestamp is set to the time (in milliseconds since midnight) the sender last touched the packet. The receive and transmit timestamps are not used.",
"title": "Control messages"
},
{
"paragraph_id": 24,
"text": "Where:",
"title": "Control messages"
},
{
"paragraph_id": 25,
"text": "Timestamp Reply replies to a Timestamp message. It consists of the originating timestamp sent by the sender of the Timestamp as well as a receive timestamp indicating when the Timestamp was received and a transmit timestamp indicating when the Timestamp reply was sent.",
"title": "Control messages"
},
{
"paragraph_id": 26,
"text": "Where:",
"title": "Control messages"
},
{
"paragraph_id": 27,
"text": "The use of Timestamp and Timestamp Reply messages to synchronize the clocks of Internet nodes has largely been replaced by the UDP-based Network Time Protocol and the Precision Time Protocol.",
"title": "Control messages"
},
{
"paragraph_id": 28,
"text": "Address mask request is normally sent by a host to a router in order to obtain an appropriate subnet mask.",
"title": "Control messages"
},
{
"paragraph_id": 29,
"text": "Recipients should reply to this message with an Address mask reply message.",
"title": "Control messages"
},
{
"paragraph_id": 30,
"text": "Where:",
"title": "Control messages"
},
{
"paragraph_id": 31,
"text": "ICMP Address Mask Request may be used as a part of reconnaissance attack to gather information on the target network, therefore ICMP Address Mask Reply is disabled by default on Cisco IOS.",
"title": "Control messages"
},
{
"paragraph_id": 32,
"text": "Address mask reply is used to reply to an address mask request message with an appropriate subnet mask.",
"title": "Control messages"
},
{
"paragraph_id": 33,
"text": "Where:",
"title": "Control messages"
},
{
"paragraph_id": 34,
"text": "Destination unreachable is generated by the host or its inbound gateway to inform the client that the destination is unreachable for some reason. Reasons for this message may include: the physical connection to the host does not exist (distance is infinite); the indicated protocol or port is not active; the data must be fragmented but the 'don't fragment' flag is on. Unreachable TCP ports notably respond with TCP RST rather than a destination unreachable type 3 as might be expected. Destination unreachable is never reported for IP multicast transmissions.",
"title": "Control messages"
},
{
"paragraph_id": 35,
"text": "Where:",
"title": "Control messages"
}
]
| The Internet Control Message Protocol (ICMP) is a supporting protocol in the Internet protocol suite. It is used by network devices, including routers, to send error messages and operational information indicating success or failure when communicating with another IP address, for example, an error is indicated when a requested service is not available or that a host or router could not be reached. ICMP differs from transport protocols such as TCP and UDP in that it is not typically used to exchange data between systems, nor is it regularly employed by end-user network applications. ICMP for IPv4 is defined in RFC 792. A separate ICMPv6, defined by RFC 4443, is used with IPv6. | 2001-10-02T08:36:20Z | 2023-12-28T13:48:51Z | [
"Template:CN",
"Template:Anchor",
"Template:Table-experimental",
"Template:Cite ietf",
"Template:Cite rfc",
"Template:Wikiversity",
"Template:Short description",
"Template:N/a",
"Template:Dc",
"Template:Cite IETF",
"Template:Reflist",
"Template:Web archive",
"Template:About",
"Template:IETF RFC",
"Template:Slink",
"Template:Rp",
"Template:Authority control",
"Template:Infobox networking protocol",
"Template:IPstack",
"Template:Cite book",
"Template:Cite web"
]
| https://en.wikipedia.org/wiki/Internet_Control_Message_Protocol |
15,108 | ICMP | ICMP may refer to: | [
{
"paragraph_id": 0,
"text": "ICMP may refer to:",
"title": ""
}
]
| ICMP may refer to: | 2020-12-13T05:31:42Z | [
"Template:Disambiguation",
"Template:Wiktionary"
]
| https://en.wikipedia.org/wiki/ICMP |
|
15,109 | Inverse limit | In mathematics, the inverse limit (also called the projective limit) is a construction that allows one to "glue together" several related objects, the precise gluing process being specified by morphisms between the objects. Thus, inverse limits can be defined in any category although their existence depends on the category that is considered. They are a special case of the concept of limit in category theory.
By working in the dual category, that is by reverting the arrows, an inverse limit becomes a direct limit or inductive limit, and a limit becomes a colimit.
We start with the definition of an inverse system (or projective system) of groups and homomorphisms. Let (I, ≤) be a directed poset (not all authors require I to be directed). Let (Ai)i∈I be a family of groups and suppose we have a family of homomorphisms fij : Aj → Ai for all i ≤ j (note the order) with the following properties: fii is the identity on Ai, and fik = fij ∘ fjk whenever i ≤ j ≤ k.
Then the pair ((Ai)i∈I, (fij)i≤j∈I) is called an inverse system of groups and morphisms over I, and the morphisms fij are called the transition morphisms of the system.
We define the inverse limit of the inverse system ((Ai)i∈I, (fij)i≤j∈I) as a particular subgroup of the direct product of the Ai's:
A = lim← Ai = { (ai)i∈I ∈ ∏ Ai : ai = fij(aj) for all i ≤ j in I }.
The inverse limit A comes equipped with natural projections πi: A → Ai which pick out the ith component of the direct product for each i in I. The inverse limit and the natural projections satisfy a universal property described in the next section.
This same construction may be carried out if the Ai's are sets, semigroups, topological spaces, rings, modules (over a fixed ring), algebras (over a fixed ring), etc., and the homomorphisms are morphisms in the corresponding category. The inverse limit will also belong to that category.
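To see the construction in a small, computable case, the following sketch (added as an illustration, not part of the article) enumerates the compatible tuples in a truncated product of the rings Z/p^k Z, taking reduction modulo p^i as the transition map Z/p^j Z → Z/p^i Z for i ≤ j. At level N the compatible tuples correspond exactly to Z/p^N Z, which is how the full inverse limit over all k produces the ring of p-adic integers mentioned later in the article.

```python
from itertools import product

def truncated_inverse_limit(N: int, p: int = 2):
    """Brute-force the compatible tuples (a_1, ..., a_N) with a_k in Z/p^k Z,
    where the transition map down to level i is reduction mod p**i."""
    moduli = [p ** k for k in range(1, N + 1)]
    compatible = []
    for tup in product(*(range(m) for m in moduli)):
        if all(tup[i] == tup[j] % moduli[i]
               for i in range(N) for j in range(i + 1, N)):
            compatible.append(tup)
    return compatible

elements = truncated_inverse_limit(3, p=2)
print(len(elements))     # 8: each compatible tuple is determined by its last entry mod 2**3
print(elements[:3])      # (0, 0, 0), (0, 0, 4), (0, 2, 2)
```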
The inverse limit can be defined abstractly in an arbitrary category by means of a universal property. Let (Xi, fij) be an inverse system of objects and morphisms in a category C (same definition as above). The inverse limit of this system is an object X in C together with morphisms πi: X → Xi (called projections) satisfying πi = fij ∘ πj for all i ≤ j. The pair (X, πi) must be universal in the sense that for any other such pair (Y, ψi) there exists a unique morphism u: Y → X such that the diagram
commutes for all i ≤ j (in particular, ψi = πi ∘ u for every i). The inverse limit is often denoted
X = lim← Xi,
with the inverse system (Xi, fij) being understood.
In some categories, the inverse limit of certain inverse systems does not exist. If it does, however, it is unique in a strong sense: given any two inverse limits X and X′ of an inverse system, there exists a unique isomorphism X′ → X commuting with the projection maps.
Inverse systems and inverse limits in a category C admit an alternative description in terms of functors. Any partially ordered set I can be considered as a small category where the morphisms consist of arrows i → j if and only if i ≤ j. An inverse system is then just a contravariant functor I → C. Let C^{I^{op}} be the category of these functors (with natural transformations as morphisms). An object X of C can be considered a trivial inverse system, where all objects are equal to X and all arrows are the identity of X. This defines a "trivial functor" from C to C^{I^{op}}. The inverse limit, if it exists, is defined as a right adjoint of this trivial functor.
For an abelian category C, the inverse limit functor lim←: C^(I^op) → C is left exact. If I is totally ordered (not simply partially ordered) and countable, and C is the category Ab of abelian groups, the Mittag-Leffler condition is a condition on the transition morphisms fij that ensures the exactness of lim←. Specifically, Eilenberg constructed a functor lim←^1 (pronounced "lim one") such that if (Ai, fij), (Bi, gij), and (Ci, hij) are three inverse systems of abelian groups, and
0 → (Ai) → (Bi) → (Ci) → 0 is a short exact sequence of inverse systems, then 0 → lim← Ai → lim← Bi → lim← Ci → lim←^1 Ai → lim←^1 Bi → lim←^1 Ci → 0 is an exact sequence in Ab.
If the ranges of the morphisms of an inverse system of abelian groups (Ai, fij) are stationary, that is, for every k there exists j ≥ k such that for all i ≥ j: fkj(Aj) = fki(Ai), one says that the system satisfies the Mittag-Leffler condition.
The name "Mittag-Leffler" for this condition was given by Bourbaki in their chapter on uniform structures for a similar result about inverse limits of complete Hausdorff uniform spaces. Mittag-Leffler used a similar argument in the proof of Mittag-Leffler's theorem.
The following situations are examples where the Mittag-Leffler condition is satisfied: inverse systems in which all transition morphisms are surjective, and systems of finite groups or finite-dimensional vector spaces over a field (more generally, systems of modules of finite length), in which the decreasing chains of images must stabilize.
An example where lim←^1 is non-zero is obtained by taking I to be the non-negative integers, letting Ai = p^iZ, Bi = Z, and Ci = Bi/Ai = Z/p^iZ. Then lim←^1 Ai = Zp/Z, where Zp denotes the p-adic integers.
More generally, if C is an arbitrary abelian category that has enough injectives, then so does C^(I^op), and the right derived functors of the inverse limit functor can thus be defined. The nth right derived functor is denoted R^n lim←.
In the case where C satisfies Grothendieck's axiom (AB4*), Jan-Erik Roos generalized the functor lim←^1 on Ab^(I^op) to a series of functors lim←^n such that lim←^n ≅ R^n lim←.
It was thought for almost 40 years that Roos had proved (in Sur les foncteurs dérivés de lim. Applications.) that lim←^1 Ai = 0 for (Ai, fij) an inverse system with surjective transition morphisms and I the set of non-negative integers (such inverse systems are often called "Mittag-Leffler sequences"). However, in 2002, Amnon Neeman and Pierre Deligne constructed an example of such a system in a category satisfying (AB4) (in addition to (AB4*)) with lim←^1 Ai ≠ 0. Roos has since shown (in "Derived functors of inverse limits revisited") that his result is correct if C has a set of generators (in addition to satisfying (AB3) and (AB4*)).
Barry Mitchell has shown (in "The cohomological dimension of a directed set") that if I has cardinality ℵd (the dth infinite cardinal), then R^n lim← is zero for all n ≥ d + 2. This applies to I-indexed diagrams in the category of R-modules, with R a commutative ring; it is not necessarily true in an arbitrary abelian category (see Roos' "Derived functors of inverse limits revisited" for examples of abelian categories in which lim←^n, on diagrams indexed by a countable set, is nonzero for n > 1).
The categorical dual of an inverse limit is a direct limit (or inductive limit). More general concepts are the limits and colimits of category theory. The terminology is somewhat confusing: inverse limits are a class of limits, while direct limits are a class of colimits.
15,111 | Interplanetary spaceflight | Interplanetary spaceflight or interplanetary travel is the crewed or uncrewed travel between stars and planets, usually within a single planetary system. In practice, spaceflights of this type are confined to travel between the planets of the Solar System. Uncrewed space probes have flown to all the observed planets in the Solar System as well as to dwarf planets Pluto and Ceres, and several asteroids. Orbiters and landers return more information than fly-by missions. Crewed flights have landed on the Moon and have been planned, from time to time, for Mars, Venus and Mercury. While many scientists appreciate the knowledge value that uncrewed flights provide, the value of crewed missions is more controversial. Science fiction writers propose a number of benefits, including the mining of asteroids, access to solar power, and room for colonization in the event of an Earth catastrophe.
A number of techniques have been developed to make interplanetary flights more economical. Advances in computing and theoretical science have already improved some techniques, while new proposals may lead to improvements in speed, fuel economy, and safety. Travel techniques must take into consideration the velocity changes necessary to travel from one body to another in the Solar System. For orbital flights, an additional adjustment must be made to match the orbital speed of the destination body. Other developments are designed to improve rocket launching and propulsion, as well as the use of non-traditional sources of energy. Using extraterrestrial resources for energy, oxygen, and water would reduce costs and improve life support systems.
Any crewed interplanetary flight must include certain design requirements. Life support systems must be capable of supporting human lives for extended periods of time. Preventative measures are needed to reduce exposure to radiation and ensure optimum reliability.
Remotely guided space probes have flown by all of the observed planets of the Solar System from Mercury to Neptune, with the New Horizons probe having flown by the dwarf planet Pluto and the Dawn spacecraft currently orbiting the dwarf planet Ceres. The most distant spacecraft, Voyager 1 and Voyager 2, had both left the Solar System as of 8 December 2018, while Pioneer 10, Pioneer 11, and New Horizons are on course to leave it.
In general, planetary orbiters and landers return much more detailed and comprehensive information than fly-by missions. Space probes have been placed into orbit around all five planets known to the ancients: Venus (Venera 7, 1970), Mars (Mariner 9, 1971), Jupiter (Galileo, 1995), Saturn (Cassini/Huygens, 2004), and most recently Mercury (MESSENGER, March 2011), and have returned data about these bodies and their natural satellites.
The NEAR Shoemaker mission in 2000 orbited the large near-Earth asteroid 433 Eros and even landed on it successfully, though it had not been designed with this maneuver in mind. The Japanese ion-drive spacecraft Hayabusa in 2005 also orbited the small near-Earth asteroid 25143 Itokawa, landing on it briefly and returning grains of its surface material to Earth. Another ion-drive mission, Dawn, orbited the large asteroid Vesta (July 2011 – September 2012) and later moved on to the dwarf planet Ceres, arriving in March 2015.
Remotely controlled landers such as Viking, Pathfinder and the two Mars Exploration Rovers have landed on the surface of Mars and several Venera and Vega spacecraft have landed on the surface of Venus. The Huygens probe successfully landed on Saturn's moon, Titan.
No crewed missions have been sent to any planet of the Solar System. NASA's Apollo program, however, landed twelve people on the Moon and returned them to Earth. The American Vision for Space Exploration, originally introduced by U.S. President George W. Bush and put into practice through the Constellation program, had as a long-term goal to eventually send human astronauts to Mars. However, on February 1, 2010, President Barack Obama proposed cancelling the program in Fiscal Year 2011. An earlier project which received some significant planning by NASA included a crewed fly-by of Venus in the Manned Venus Flyby mission, but was cancelled when the Apollo Applications Program was terminated due to NASA budget cuts in the late 1960s.
The costs and risk of interplanetary travel receive a lot of publicity—spectacular examples include the malfunctions or complete failures of probes without a human crew, such as Mars 96, Deep Space 2, and Beagle 2 (the article List of Solar System probes gives a full list).
Many astronomers, geologists and biologists believe that exploration of the Solar System provides knowledge that could not be gained by observations from Earth's surface or from orbit around Earth. But they disagree about whether human-crewed missions make a useful scientific contribution—some think robotic probes are cheaper and safer, while others argue that either astronauts or spacefaring scientists, advised by Earth-based scientists, can respond more flexibly and intelligently to new or unexpected features of the region they are exploring.
Those who pay for such missions (primarily in the public sector) are more likely to be interested in benefits for themselves or for the human race as a whole. So far the only benefits of this type have been "spin-off" technologies which were developed for space missions and then were found to be at least as useful in other activities (NASA publicizes spin-offs from its activities).
Other practical motivations for interplanetary travel are more speculative, because our current technologies are not yet advanced enough to support test projects. But science fiction writers have a fairly good track record in predicting future technologies—for example geosynchronous communications satellites (Arthur C. Clarke) and many aspects of computer technology (Mack Reynolds).
Many science fiction stories feature detailed descriptions of how people could extract minerals from asteroids and energy from sources including orbital solar panels (unhampered by clouds) and the very strong magnetic field of Jupiter. Some point out that such techniques may be the only way to provide rising standards of living without being stopped by pollution or by depletion of Earth's resources (for example peak oil).
Finally, colonizing other parts of the Solar System would prevent the whole human species from being exterminated by any one of a number of possible events (see Human extinction). One of these possible events is an asteroid impact like the one which may have resulted in the Cretaceous–Paleogene extinction event. Although various Spaceguard projects monitor the Solar System for objects that might come dangerously close to Earth, current asteroid deflection strategies are crude and untested. To make the task more difficult, carbonaceous chondrites are rather sooty and therefore very hard to detect. Although carbonaceous chondrites are thought to be rare, some are very large and the suspected "dinosaur-killer" may have been a carbonaceous chondrite.
Some scientists, including members of the Space Studies Institute, argue that the vast majority of mankind eventually will live in space and will benefit from doing so.
One of the main challenges in interplanetary travel is producing the very large velocity changes necessary to travel from one body to another in the Solar System.
Due to the Sun's gravitational pull, a spacecraft moving farther from the Sun will slow down, while a spacecraft moving closer will speed up. Also, since any two planets are at different distances from the Sun, the planet from which the spacecraft starts is moving around the Sun at a different speed than the planet to which the spacecraft is travelling (in accordance with Kepler's Third Law). Because of these facts, a spacecraft desiring to transfer to a planet closer to the Sun must decrease its speed with respect to the Sun by a large amount in order to intercept it, while a spacecraft traveling to a planet farther out from the Sun must increase its speed substantially. Then, if additionally the spacecraft wishes to enter into orbit around the destination planet (instead of just flying by it), it must match the planet's orbital speed around the Sun, usually requiring another large velocity change.
Simply doing this by brute force – accelerating in the shortest route to the destination and then matching the planet's speed – would require an extremely large amount of fuel. And the fuel required for producing these velocity changes has to be launched along with the payload, and therefore even more fuel is needed to put both the spacecraft and the fuel required for its interplanetary journey into orbit. Thus, several techniques have been devised to reduce the fuel requirements of interplanetary travel.
As an example of the velocity changes involved, a spacecraft travelling from low Earth orbit to Mars using a simple trajectory must first undergo a change in speed (also known as a delta-v), in this case an increase, of about 3.8 km/s. Then, after intercepting Mars, it must change its speed by another 2.3 km/s in order to match Mars' orbital speed around the Sun and enter an orbit around it. For comparison, launching a spacecraft into low Earth orbit requires a change in speed of about 9.5 km/s.
For many years economical interplanetary travel meant using the Hohmann transfer orbit. Hohmann demonstrated that the lowest energy route between any two orbits is an elliptical "orbit" which forms a tangent to the starting and destination orbits. Once the spacecraft arrives, a second application of thrust will re-circularize the orbit at the new location. In the case of planetary transfers this means directing the spacecraft, originally in an orbit almost identical to Earth's, so that the aphelion of the transfer orbit is on the far side of the Sun near the orbit of the other planet. A spacecraft traveling from Earth to Mars via this method will arrive near Mars' orbit in approximately 8.5 months, but because the orbital velocity is greater when closer to the center of mass (i.e. the Sun) and slower when farther from the center, the spacecraft will be traveling quite slowly and a small application of thrust is all that is needed to put it into a circular orbit around Mars. If the maneuver is timed properly, Mars will be "arriving" under the spacecraft when this happens.
The Hohmann transfer applies to any two orbits, not just those with planets involved. For instance it is the most common way to transfer satellites into geostationary orbit, after first being "parked" in low Earth orbit. However, the Hohmann transfer takes an amount of time similar to ½ of the orbital period of the outer orbit, so in the case of the outer planets this is many years – too long to wait. It is also based on the assumption that the points at both ends are massless, as in the case when transferring between two orbits around Earth for instance. With a planet at the destination end of the transfer, calculations become considerably more difficult.
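As a rough illustration of these figures, the following Python sketch estimates the two Hohmann burns and the transfer time between two circular, coplanar heliocentric orbits using the vis-viva equation. It treats Earth and Mars as massless points on circular orbits, so the resulting speed changes are heliocentric ones and come out smaller than the LEO-referenced 3.8 km/s and 2.3 km/s quoted above; the orbital radii used are approximate values.

```python
import math

MU_SUN = 1.327e20      # Sun's gravitational parameter, m^3/s^2
AU = 1.496e11          # astronomical unit, m

def hohmann(r1, r2, mu=MU_SUN):
    """Delta-v at departure and arrival, plus flight time, for a Hohmann
    transfer between circular coplanar orbits of radius r1 and r2."""
    a_transfer = (r1 + r2) / 2.0                    # semi-major axis of the transfer ellipse
    v1 = math.sqrt(mu / r1)                         # circular speed at r1
    v2 = math.sqrt(mu / r2)                         # circular speed at r2
    v_dep = math.sqrt(mu * (2.0 / r1 - 1.0 / a_transfer))   # vis-viva speed at departure point
    v_arr = math.sqrt(mu * (2.0 / r2 - 1.0 / a_transfer))   # vis-viva speed at arrival point
    dv1 = abs(v_dep - v1)                           # burn to enter the transfer orbit
    dv2 = abs(v2 - v_arr)                           # burn to circularize at the target
    t_transfer = math.pi * math.sqrt(a_transfer ** 3 / mu)  # half the ellipse period
    return dv1, dv2, t_transfer

dv1, dv2, t = hohmann(1.0 * AU, 1.524 * AU)         # Earth's orbit to Mars' orbit
print(f"departure burn ~{dv1 / 1000:.2f} km/s, arrival burn ~{dv2 / 1000:.2f} km/s")
print(f"transfer time  ~{t / 86400:.0f} days (~{t / 86400 / 30.4:.1f} months)")
```

With these inputs the sketch reproduces the roughly 8.5-month Earth–Mars transfer time mentioned above.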
The gravitational slingshot technique uses the gravity of planets and moons to change the speed and direction of a spacecraft without using fuel. In a typical example, a spacecraft is sent to a distant planet on a path that is much faster than what the Hohmann transfer would call for. This would typically mean that it would arrive at the planet's orbit and continue past it. However, if there is a planet between the departure point and the target, it can be used to bend the path toward the target, and in many cases the overall travel time is greatly reduced. A prime example of this is the two spacecraft of the Voyager program, which used slingshot effects to change trajectories several times in the outer Solar System. It is difficult to use this method for journeys in the inner part of the Solar System, although it is possible to use other nearby planets such as Venus or even the Moon as slingshots in journeys to the outer planets.
This maneuver can only change an object's velocity relative to a third, uninvolved object – possibly the centre of mass or the Sun. There is no change in the velocities of the two objects involved in the maneuver relative to each other. The Sun cannot be used in a gravitational slingshot because it is stationary compared to the rest of the Solar System, which orbits the Sun. It may be used to send a spaceship or probe into the galaxy, however, because the Sun revolves around the center of the Milky Way.
A powered slingshot is the use of a rocket engine at or around closest approach to a body (periapsis). A burn at this point multiplies the effect of the delta-v, giving a bigger change in speed than the same burn applied elsewhere on the trajectory (the Oberth effect).
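The sketch below illustrates the powered-slingshot effect for a simplified two-body flyby; the central body (taken here to be Jupiter), the incoming excess speed, the periapsis radius, and the burn size are assumed, illustrative values rather than data from any mission.

```python
import math

MU_JUPITER = 1.267e17   # Jupiter's gravitational parameter, m^3/s^2

def vinf_after_periapsis_burn(v_inf_in, r_periapsis, dv, mu=MU_JUPITER):
    """Hyperbolic excess speed after applying a burn dv at the periapsis of a flyby."""
    v_esc = math.sqrt(2.0 * mu / r_periapsis)        # local escape speed at periapsis
    v_peri = math.sqrt(v_inf_in ** 2 + v_esc ** 2)   # flyby speed at closest approach
    v_peri_new = v_peri + dv                         # burn applied where the speed is highest
    return math.sqrt(v_peri_new ** 2 - v_esc ** 2)   # new excess speed leaving the body

v_inf_in = 6_000.0   # incoming hyperbolic excess speed, m/s (assumed)
r_p = 2.0e8          # periapsis radius, m (assumed, roughly 2.8 Jupiter radii)
dv = 500.0           # burn at periapsis, m/s (assumed)

v_inf_out = vinf_after_periapsis_burn(v_inf_in, r_p, dv)
print(f"a {dv:.0f} m/s burn raises the excess speed by ~{v_inf_out - v_inf_in:.0f} m/s")
```

In this example a 0.5 km/s burn deep in the gravity well raises the outgoing excess speed by roughly 2.5 km/s, which is why such burns are scheduled at periapsis when possible.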
Computers did not exist when Hohmann transfer orbits were first proposed (1925) and were slow, expensive and unreliable when gravitational slingshots were developed (1959). Recent advances in computing have made it possible to exploit many more features of the gravity fields of astronomical bodies and thus calculate even lower-cost trajectories. Paths have been calculated which link the Lagrange points of the various planets into the so-called Interplanetary Transport Network. Such "fuzzy orbits" use significantly less energy than Hohmann transfers but are much, much slower. They aren't practical for human crewed missions because they generally take years or decades, but may be useful for high-volume transport of low-value commodities if humanity develops a space-based economy.
Aerobraking uses the atmosphere of the target planet to slow down. It was first used on the Apollo program, where the returning spacecraft did not enter Earth orbit but instead used an S-shaped vertical descent profile (starting with an initially steep descent, followed by a leveling out, followed by a slight climb, followed by a return to a positive rate of descent continuing to splash-down in the ocean) through Earth's atmosphere to reduce its speed until the parachute system could be deployed, enabling a safe landing. Aerobraking does not require a thick atmosphere – for example most Mars landers use the technique, and Mars' atmosphere is only about 1% as thick as Earth's.
Aerobraking converts the spacecraft's kinetic energy into heat, so it requires a heatshield to prevent the craft from burning up. As a result, aerobraking is only helpful in cases where the fuel needed to transport the heatshield to the planet is less than the fuel that would be required to brake an unshielded craft by firing its engines. This can be addressed by creating heatshields from material available near the target.
Several technologies have been proposed which both save fuel and provide significantly faster travel than the traditional methodology of using Hohmann transfers. Some are still just theoretical, but over time, several of the theoretical approaches have been tested on spaceflight missions. For example, the Deep Space 1 mission was a successful test of an ion drive. These improved technologies typically focus on one or more of improved propulsion, cheaper and reusable launch, in-space refuelling, and the use of non-terrestrial resources.
Besides making travel faster or cost less, such improvements could also allow greater design "safety margins" by reducing the imperative to make spacecraft lighter.
All rocket concepts are limited by the Tsiolkovsky rocket equation, which sets the characteristic velocity available as a function of exhaust velocity and mass ratio, that is, the ratio of initial (M0, including fuel) to final (M1, fuel depleted) mass. The main consequence is that mission velocities of more than a few times the velocity of the rocket motor exhaust (with respect to the vehicle) rapidly become impractical, as the dry mass (mass of payload and rocket without fuel) falls to below 10% of the entire rocket's wet mass (mass of rocket with fuel).
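A minimal sketch of this constraint follows; the exhaust velocities are rough, assumed figures for a typical chemical engine and a typical ion engine rather than values for any specific design.

```python
import math

def mass_ratio(delta_v, v_exhaust):
    """Tsiolkovsky rocket equation: delta_v = v_e * ln(M0/M1), so M0/M1 = exp(delta_v/v_e)."""
    return math.exp(delta_v / v_exhaust)

engines = [("chemical, v_e ~ 4.4 km/s", 4_400.0), ("ion drive, v_e ~ 30 km/s", 30_000.0)]
for label, v_e in engines:
    for dv in (3_800.0, 9_500.0, 30_000.0):          # sample mission delta-vs, m/s
        ratio = mass_ratio(dv, v_e)
        dry_fraction = 1.0 / ratio                    # payload plus structure as share of wet mass
        print(f"{label}: dv = {dv / 1000:>4.1f} km/s, "
              f"M0/M1 = {ratio:7.1f}, dry mass fraction = {dry_fraction:6.1%}")
```

The output shows the point made above: once the mission delta-v reaches several times the exhaust velocity, the dry mass fraction collapses well below 10%, whereas a high-exhaust-velocity engine keeps the mass ratio modest.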
In a nuclear thermal rocket or solar thermal rocket, a working fluid, usually hydrogen, is heated to a high temperature, and then expands through a rocket nozzle to create thrust. The energy replaces the chemical energy of the reactive chemicals in a traditional rocket engine. Due to the low molecular mass and hence high thermal velocity of hydrogen, these engines are at least twice as fuel efficient as chemical engines, even after including the weight of the reactor.
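The advantage of a light exhaust gas can be sketched with ideal-gas scaling, in which the exhaust velocity grows roughly as the square root of chamber temperature divided by molar mass. The temperatures, molar masses and ratio of specific heats below are assumed round numbers, and real nozzle losses are ignored, so the figures are only indicative.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def exhaust_velocity(T_chamber, molar_mass, gamma=1.3):
    """Idealized fully-expanded nozzle estimate: v_e ~ sqrt(2*gamma/(gamma-1) * R*T / M)."""
    return math.sqrt(2.0 * gamma / (gamma - 1.0) * R * T_chamber / molar_mass)

# Assumed comparison: nuclear-thermal hydrogen exhaust vs. hydrogen-rich chemical exhaust.
v_ntr = exhaust_velocity(T_chamber=2700.0, molar_mass=0.002)    # H2 at ~2700 K core temperature
v_chem = exhaust_velocity(T_chamber=3500.0, molar_mass=0.013)   # H2/O2 exhaust, ~13 g/mol average

print(f"nuclear-thermal hydrogen: ~{v_ntr:.0f} m/s")
print(f"chemical (H2/O2-like):    ~{v_chem:.0f} m/s  (ratio ~{v_ntr / v_chem:.1f}x)")
```

Even with its lower operating temperature, the hydrogen exhaust comes out roughly twice as fast, consistent with the factor-of-two efficiency gain stated above.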
The US Atomic Energy Commission and NASA tested a few designs from 1959 to 1968. The NASA designs were conceived as replacements for the upper stages of the Saturn V launch vehicle, but the tests revealed reliability problems, mainly caused by the vibration and heating involved in running the engines at such high thrust levels. Political and environmental considerations make it unlikely such an engine will be used in the foreseeable future, since nuclear thermal rockets would be most useful at or near the Earth's surface and the consequences of a malfunction could be disastrous. Fission-based thermal rocket concepts produce lower exhaust velocities than the electric and plasma concepts described below, and are therefore less attractive solutions. For applications requiring high thrust-to-weight ratio, such as planetary escape, nuclear thermal is potentially more attractive.
Electric propulsion systems use an external source such as a nuclear reactor or solar cells to generate electricity, which is then used to accelerate a chemically inert propellant to speeds far higher than achieved in a chemical rocket. Such drives produce feeble thrust, and are therefore unsuitable for quick maneuvers or for launching from the surface of a planet. But they are so economical in their use of reaction mass that they can keep firing continuously for days or weeks, while chemical rockets use up reaction mass so quickly that they can only fire for seconds or minutes. Even a trip to the Moon is long enough for an electric propulsion system to outrun a chemical rocket – the Apollo missions took 3 days in each direction.
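A small sketch of how feeble but continuous thrust accumulates is given below; the thrust level, spacecraft mass, and burn durations are assumed, illustrative values, and the small change in propellant mass during the burn is ignored.

```python
def low_thrust_delta_v(thrust_newtons, mass_kg, days):
    """Delta-v from a constant low thrust, neglecting the propellant mass consumed."""
    accel = thrust_newtons / mass_kg       # m/s^2
    return accel * days * 86_400.0         # seconds of thrusting times acceleration

# Assumed ion engine: 90 mN of thrust pushing a 500 kg spacecraft.
for days in (3, 30, 300):
    dv = low_thrust_delta_v(0.090, 500.0, days)
    print(f"{days:>3d} days of thrusting -> ~{dv:,.0f} m/s of delta-v")
```

Three days of such thrust yields only a few tens of m/s, but hundreds of days add up to several km/s, which is why electric propulsion pays off on long interplanetary cruises.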
NASA's Deep Space 1 was a very successful test of a prototype ion drive, which fired for a total of 678 days and enabled the probe to catch up with Comet Borrelly, a feat which would have been impossible for a chemical rocket. Dawn, the first NASA operational (i.e., non-technology demonstration) mission to use an ion drive for its primary propulsion, successfully orbited the large main-belt asteroids 1 Ceres and 4 Vesta. A more ambitious, nuclear-powered version was intended for a Jupiter mission without human crew, the Jupiter Icy Moons Orbiter (JIMO), originally planned for launch sometime in the next decade. Due to a shift in priorities at NASA that favored human crewed space missions, the project lost funding in 2005. A similar mission is currently under discussion as the US component of a joint NASA/ESA program for the exploration of Europa and Ganymede.
A NASA multi-center Technology Applications Assessment Team, led from the Johnson Space Center, described in January 2011 "Nautilus-X", a concept study for a multi-mission space exploration vehicle useful for missions beyond low Earth orbit (LEO), of up to 24 months duration for a crew of up to six. Although Nautilus-X is adaptable to a variety of mission-specific propulsion units of various low-thrust, high specific impulse (Isp) designs, nuclear ion-electric drive is shown for illustrative purposes. It is intended for integration and checkout at the International Space Station (ISS), and would be suitable for deep-space missions from the ISS to and beyond the Moon, including Earth/Moon L1, Sun/Earth L2, near-Earth asteroidal, and Mars orbital destinations. It incorporates a reduced-g centrifuge providing artificial gravity for crew health to ameliorate the effects of long-term 0g exposure, and the capability to mitigate the space radiation environment.
The electric propulsion missions already flown, or currently scheduled, have used solar electric power, limiting their capability to operate far from the Sun, and also limiting their peak acceleration due to the mass of the electric power source. Nuclear-electric or plasma engines, operating for long periods at low thrust and powered by fission reactors, can reach speeds much greater than chemically powered vehicles.
Fusion rockets, powered by nuclear fusion reactions, would "burn" such light element fuels as deuterium, tritium, or helium-3 (³He). Because fusion yields about 1% of the mass of the nuclear fuel as released energy, it is energetically more favorable than fission, which releases only about 0.1% of the fuel's mass-energy. However, either fission or fusion technologies can in principle achieve velocities far higher than needed for Solar System exploration, and fusion energy still awaits practical demonstration on Earth.
One proposal using a fusion rocket was Project Daedalus. Another fairly detailed vehicle system, designed and optimized for crewed Solar System exploration, "Discovery II", based on the D–³He reaction but using hydrogen as reaction mass, has been described by a team from NASA's Glenn Research Center. It achieves characteristic velocities of >300 km/s with an acceleration of ~1.7×10⁻³ g, with a ship initial mass of ~1700 metric tons, and payload fraction above 10%.
Fusion rockets are considered to be a likely source of interplanetary transport for a planetary civilization.
See the spacecraft propulsion article for a discussion of a number of other technologies that could, in the medium to longer term, be the basis of interplanetary missions. Unlike the situation with interstellar travel, the barriers to fast interplanetary travel involve engineering and economics rather than any basic physics.
Solar sails rely on the fact that light reflected from a surface exerts pressure on the surface. The radiation pressure is small and decreases with the square of the distance from the Sun, but unlike rockets, solar sails require no fuel. Although the thrust is small, it continues as long as the Sun shines and the sail is deployed.
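For a sense of scale, the sketch below estimates the photon force on an idealized, perfectly reflecting flat sail facing the Sun; the sail area and total spacecraft mass are assumed values chosen only for illustration.

```python
SOLAR_CONSTANT_1AU = 1361.0   # solar flux at 1 AU, W/m^2
C = 2.998e8                   # speed of light, m/s

def sail_force(area_m2, distance_au, reflectivity=1.0):
    """Photon force on an ideal flat sail facing the Sun; the flux falls off as 1/r^2."""
    flux = SOLAR_CONSTANT_1AU / distance_au ** 2
    pressure = (1.0 + reflectivity) * flux / C    # a perfect reflector doubles the push
    return pressure * area_m2                     # newtons

area_m2, mass_kg = 1000.0, 100.0                  # assumed 1000 m^2 sail on a 100 kg craft
force = sail_force(area_m2, 1.0)                  # at 1 AU
accel = force / mass_kg
print(f"force ~{force * 1000:.1f} mN, acceleration ~{accel:.1e} m/s^2")
print(f"speed gained over 30 days of continuous sunlight: ~{accel * 30 * 86400:.0f} m/s")
```

The force is only of order millinewtons, but because it never switches off it can build up hundreds of m/s per month on a light spacecraft.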
The original concept relied only on radiation from the Sun – for example in Arthur C. Clarke's 1965 story "Sunjammer". More recent light sail designs propose to boost the thrust by aiming ground-based lasers or masers at the sail. Ground-based lasers or masers can also help a light-sail spacecraft to decelerate: the sail splits into an outer and inner section, the outer section is pushed forward and its shape is changed mechanically to focus reflected radiation on the inner portion, and the radiation focused on the inner section acts as a brake.
Although most articles about light sails focus on interstellar travel, there have been several proposals for their use within the Solar System.
Currently, the only spacecraft to use a solar sail as the main method of propulsion is IKAROS which was launched by JAXA on May 21, 2010. It has since been successfully deployed, and shown to be producing acceleration as expected. Many ordinary spacecraft and satellites also use solar collectors, temperature-control panels and Sun shades as light sails, to make minor corrections to their attitude and orbit without using fuel. A few have even had small purpose-built solar sails for this use (for example Eurostar E3000 geostationary communications satellites built by EADS Astrium).
It is possible to put stations or spacecraft on orbits that cycle between different planets, for example a Mars cycler would synchronously cycle between Mars and Earth, with very little propellant usage to maintain the trajectory. Cyclers are conceptually a good idea, because massive radiation shields, life support and other equipment only need to be put onto the cycler trajectory once. A cycler could combine several roles: habitat (for example it could spin to produce an "artificial gravity" effect); mothership (providing life support for the crews of smaller spacecraft which hitch a ride on it). Cyclers could also possibly make excellent cargo ships for resupply of a colony.
A space elevator is a theoretical structure that would transport material from a planet's surface into orbit. The idea is that, once the expensive job of building the elevator is complete, an indefinite number of loads can be transported into orbit at minimal cost. Even the simplest designs avoid the vicious circle of rocket launches from the surface, wherein the fuel needed to travel the last 10% of the distance into orbit must be lifted all the way from the surface, requiring even more fuel, and so on. More sophisticated space elevator designs reduce the energy cost per trip by using counterweights, and the most ambitious schemes aim to balance loads going up and down and thus make the energy cost close to zero. Space elevators have also sometimes been referred to as "beanstalks", "space bridges", "space lifts", "space ladders" and "orbital towers".
A terrestrial space elevator is beyond our current technology, although a lunar space elevator could theoretically be built using existing materials.
A skyhook is a theoretical class of orbiting tether propulsion intended to lift payloads to high altitudes and speeds. Proposals for skyhooks include designs that employ tethers spinning at hypersonic speed for catching high speed payloads or high altitude aircraft and placing them in orbit. In addition, it has been suggested that the rotating skyhook is "not engineeringly feasible using presently available materials".
The SpaceX Starship is designed to be fully and rapidly reusable, making use of the SpaceX reusable technology that was developed during 2011–2018 for Falcon 9 and Falcon Heavy launch vehicles.
SpaceX CEO Elon Musk estimates that the reusability capability alone, on both the launch vehicle and the spacecraft associated with the Starship will reduce overall system costs per tonne delivered to Mars by at least two orders of magnitude over what NASA had previously achieved.
When launching interplanetary probes from the surface of Earth, carrying all energy needed for the long-duration mission, payload quantities are necessarily extremely limited, due to the basic mass limitations described theoretically by the rocket equation. One alternative to transport more mass on interplanetary trajectories is to use up nearly all of the upper stage propellant on launch, and then refill propellants in Earth orbit before firing the rocket to escape velocity for a heliocentric trajectory. These propellants could be stored on orbit at a propellant depot, or carried to orbit in a propellant tanker to be directly transferred to the interplanetary spacecraft. For returning mass to Earth, a related option is to mine raw materials from a Solar System celestial object, refine, process, and store the reaction products (propellant) on the Solar System body until such time as a vehicle needs to be loaded for launch.
As of 2019, SpaceX is developing a system in which a reusable first stage vehicle would transport a crewed interplanetary spacecraft to Earth orbit, detach, and return to its launch pad, where a tanker spacecraft would be mounted atop it; both would then be fueled and launched again to rendezvous with the waiting crewed spacecraft. The tanker would then transfer its fuel to the crewed spacecraft for use on its interplanetary voyage. The SpaceX Starship is a stainless steel-structure spacecraft propelled by six Raptor engines operating on densified methane/oxygen propellants. It is 55 m (180 ft) long and 9 m (30 ft) in diameter at its widest point, and is capable of transporting up to 100 tonnes (220,000 lb) of cargo and passengers per trip to Mars, with on-orbit propellant refill before the interplanetary part of the journey.
As an example of a funded project currently under development, a key part of the system SpaceX has designed for Mars in order to radically decrease the cost of spaceflight to interplanetary destinations is the placement and operation of a physical plant on Mars to handle production and storage of the propellant components necessary to launch and fly the Starships back to Earth, or perhaps to increase the mass that can be transported onward to destinations in the outer Solar System.
The first Starship to Mars will carry a small propellant plant as a part of its cargo load. The plant will be expanded over multiple synodic periods (Earth–Mars launch windows) as more equipment arrives, is installed, and placed into mostly-autonomous production.
The SpaceX propellant plant will take advantage of the large supplies of carbon dioxide and water resources on Mars, mining the water (H2O) from subsurface ice and collecting CO2 from the atmosphere. A chemical plant will process the raw materials by means of electrolysis and the Sabatier process to produce oxygen (O2) and methane (CH4), and then liquefy them to facilitate long-term storage and ultimate use.
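The mass flows involved can be sketched from the reaction stoichiometry alone, under idealized assumptions: complete reactions, the water produced by the Sabatier reaction recycled back into electrolysis, and no allowance for losses or for the engine's actual oxygen-to-methane mixture ratio.

```python
# Idealized in-situ methalox production on Mars:
#   electrolysis:  2 H2O -> 2 H2 + O2
#   Sabatier:      CO2 + 4 H2 -> CH4 + 2 H2O   (this water is recycled to electrolysis)
# Net per mole of CH4: 2 H2O and 1 CO2 consumed, 1 CH4 and 2 O2 produced.
M_H2O, M_CO2, M_CH4, M_O2 = 18.0, 44.0, 16.0, 32.0   # molar masses, g/mol

def feedstock_per_tonne_methane(tonnes_ch4=1.0):
    moles_ch4 = tonnes_ch4 * 1e6 / M_CH4       # grams of CH4 -> moles
    water_t = 2 * moles_ch4 * M_H2O / 1e6      # net water mined, tonnes
    co2_t = 1 * moles_ch4 * M_CO2 / 1e6        # CO2 captured from the atmosphere, tonnes
    o2_t = 2 * moles_ch4 * M_O2 / 1e6          # oxygen produced alongside, tonnes
    return water_t, co2_t, o2_t

water, co2, o2 = feedstock_per_tonne_methane()
print(f"per tonne of CH4: ~{water:.2f} t H2O and ~{co2:.2f} t CO2 in, ~{o2:.2f} t O2 out")
```

Under these assumptions each tonne of methane comes with about four tonnes of oxygen, so the plant's output is naturally dominated by liquid oxygen, roughly matching the oxygen-rich mixture a methane/oxygen engine burns.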
Current space vehicles attempt to launch with all their fuel (propellants and energy supplies) on board that they will need for their entire journey, and current space structures are lifted from the Earth's surface. Non-terrestrial sources of energy and materials are mostly a lot further away, but most would not require lifting out of a strong gravity field and therefore should be much cheaper to use in space in the long term.
The most important non-terrestrial resource is energy, because it can be used to transform non-terrestrial materials into useful forms (some of which may also produce energy). At least two fundamental non-terrestrial energy sources have been proposed: solar-powered energy generation (unhampered by clouds), either directly by solar cells or indirectly by focusing solar radiation on boilers which produce steam to drive generators; and electrodynamic tethers which generate electricity from the powerful magnetic fields of some planets (Jupiter has a very powerful magnetic field).
Water ice would be very useful and is widespread on the moons of Jupiter and Saturn:
Oxygen is a common constituent of the Moon's crust, and is probably abundant in most other bodies in the Solar System. Non-terrestrial oxygen would be valuable as a source of water only if an adequate source of hydrogen can be found. Possible uses include:
Unfortunately, hydrogen, along with other volatiles like carbon and nitrogen, is much less abundant than oxygen in the inner Solar System.
Scientists expect to find a vast range of organic compounds in some of the planets, moons and comets of the outer Solar System, and the range of possible uses is even wider. For example, methane can be used as a fuel (burned with non-terrestrial oxygen), or as a feedstock for petrochemical processes such as making plastics. And ammonia could be a valuable feedstock for producing fertilizers to be used in the vegetable gardens of orbital and planetary bases, reducing the need to lift food to them from Earth.
Even unprocessed rock may be useful as rocket propellant if mass drivers are employed.
Life support systems must be capable of supporting human life for weeks, months or even years. A breathable atmosphere of at least 35 kPa (5.1 psi) must be maintained, with adequate amounts of oxygen, nitrogen, and controlled levels of carbon dioxide, trace gases and water vapor.
In October 2015, the NASA Office of Inspector General issued a health hazards report related to human spaceflight, including a human mission to Mars.
Once a vehicle leaves low Earth orbit and the protection of Earth's magnetosphere, it enters the Van Allen radiation belt, a region of high radiation. Beyond the Van Allen belts, radiation levels generally decrease, but can fluctuate over time. These high energy cosmic rays pose a health threat. Even the minimum levels of radiation during these fluctuations is comparable to the current annual limit for astronauts in low-Earth orbit.
Scientists of Russian Academy of Sciences are searching for methods of reducing the risk of radiation-induced cancer in preparation for the mission to Mars. They consider as one of the options a life support system generating drinking water with low content of deuterium (a stable isotope of hydrogen) to be consumed by the crew members. Preliminary investigations have shown that deuterium-depleted water features certain anti-cancer effects. Hence, deuterium-free drinking water is considered to have the potential of lowering the risk of cancer caused by extreme radiation exposure of the Martian crew.
In addition, coronal mass ejections from the Sun are highly dangerous, and are fatal within a very short timescale to humans unless they are protected by massive shielding.
Any major failure to a spacecraft en route is likely to be fatal, and even a minor one could have dangerous results if not repaired quickly, something difficult to accomplish in open space. The crew of the Apollo 13 mission survived despite an explosion caused by a faulty oxygen tank (1970).
For astrodynamics reasons, economic spacecraft travel to other planets is only practical within certain time windows. Outside these windows the planets are essentially inaccessible from Earth with current technology. This constrains flights and limits rescue options in the case of an emergency. | [
{
"paragraph_id": 0,
"text": "Interplanetary spaceflight or interplanetary travel is the crewed or uncrewed travel between stars and planets, usually within a single planetary system. In practice, spaceflights of this type are confined to travel between the planets of the Solar System. Uncrewed space probes have flown to all the observed planets in the Solar System as well as to dwarf planets Pluto and Ceres, and several asteroids. Orbiters and landers return more information than fly-by missions. Crewed flights have landed on the Moon and have been planned, from time to time, for Mars, Venus and Mercury. While many scientists appreciate the knowledge value that uncrewed flights provide, the value of crewed missions is more controversial. Science fiction writers propose a number of benefits, including the mining of asteroids, access to solar power, and room for colonization in the event of an Earth catastrophe.",
"title": ""
},
{
"paragraph_id": 1,
"text": "A number of techniques have been developed to make interplanetary flights more economical. Advances in computing and theoretical science have already improved some techniques, while new proposals may lead to improvements in speed, fuel economy, and safety. Travel techniques must take into consideration the velocity changes necessary to travel from one body to another in the Solar System. For orbital flights, an additional adjustment must be made to match the orbital speed of the destination body. Other developments are designed to improve rocket launching and propulsion, as well as the use of non-traditional sources of energy. Using extraterrestrial resources for energy, oxygen, and water would reduce costs and improve life support systems.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Any crewed interplanetary flight must include certain design requirements. Life support systems must be capable of supporting human lives for extended periods of time. Preventative measures are needed to reduce exposure to radiation and ensure optimum reliability.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Remotely guided space probes have flown by all of the observed planets of the Solar System from Mercury to Neptune, with the New Horizons probe having flown by the dwarf planet Pluto and the Dawn spacecraft currently orbiting the dwarf planet Ceres. The most distant spacecraft, Voyager 1 and Voyager 2 have left the Solar System as of 8 December 2018 while Pioneer 10, Pioneer 11, and New Horizons are on course to leave it.",
"title": "Current achievements in interplanetary travel"
},
{
"paragraph_id": 4,
"text": "In general, planetary orbiters and landers return much more detailed and comprehensive information than fly-by missions. Space probes have been placed into orbit around all the five planets known to the ancients: The first being Venus (Venera 7, 1970), Mars (Mariner 9, 1971), Jupiter (Galileo, 1995), Saturn (Cassini/Huygens, 2004), and most recently Mercury (MESSENGER, March 2011), and have returned data about these bodies and their natural satellites.",
"title": "Current achievements in interplanetary travel"
},
{
"paragraph_id": 5,
"text": "The NEAR Shoemaker mission in 2000 orbited the large near-Earth asteroid 433 Eros, and was even successfully landed there, though it had not been designed with this maneuver in mind. The Japanese ion-drive spacecraft Hayabusa in 2005 also orbited the small near-Earth asteroid 25143 Itokawa, landing on it briefly and returning grains of its surface material to Earth. Another ion-drive mission, Dawn, has orbited the large asteroid Vesta (July 2011 – September 2012) and later moved on to the dwarf planet Ceres, arriving in March 2015.",
"title": "Current achievements in interplanetary travel"
},
{
"paragraph_id": 6,
"text": "Remotely controlled landers such as Viking, Pathfinder and the two Mars Exploration Rovers have landed on the surface of Mars and several Venera and Vega spacecraft have landed on the surface of Venus. The Huygens probe successfully landed on Saturn's moon, Titan.",
"title": "Current achievements in interplanetary travel"
},
{
"paragraph_id": 7,
"text": "No crewed missions have been sent to any planet of the Solar System. NASA's Apollo program, however, landed twelve people on the Moon and returned them to Earth. The American Vision for Space Exploration, originally introduced by U.S. President George W. Bush and put into practice through the Constellation program, had as a long-term goal to eventually send human astronauts to Mars. However, on February 1, 2010, President Barack Obama proposed cancelling the program in Fiscal Year 2011. An earlier project which received some significant planning by NASA included a crewed fly-by of Venus in the Manned Venus Flyby mission, but was cancelled when the Apollo Applications Program was terminated due to NASA budget cuts in the late 1960s.",
"title": "Current achievements in interplanetary travel"
},
{
"paragraph_id": 8,
"text": "The costs and risk of interplanetary travel receive a lot of publicity—spectacular examples include the malfunctions or complete failures of probes without a human crew, such as Mars 96, Deep Space 2, and Beagle 2 (the article List of Solar System probes gives a full list).",
"title": "Reasons for interplanetary travel"
},
{
"paragraph_id": 9,
"text": "Many astronomers, geologists and biologists believe that exploration of the Solar System provides knowledge that could not be gained by observations from Earth's surface or from orbit around Earth. But they disagree about whether human-crewed missions make a useful scientific contribution—some think robotic probes are cheaper and safer, while others argue that either astronauts or spacefaring scientists, advised by Earth-based scientists, can respond more flexibly and intelligently to new or unexpected features of the region they are exploring.",
"title": "Reasons for interplanetary travel"
},
{
"paragraph_id": 10,
"text": "Those who pay for such missions (primarily in the public sector) are more likely to be interested in benefits for themselves or for the human race as a whole. So far the only benefits of this type have been \"spin-off\" technologies which were developed for space missions and then were found to be at least as useful in other activities (NASA publicizes spin-offs from its activities).",
"title": "Reasons for interplanetary travel"
},
{
"paragraph_id": 11,
"text": "Other practical motivations for interplanetary travel are more speculative, because our current technologies are not yet advanced enough to support test projects. But science fiction writers have a fairly good track record in predicting future technologies—for example geosynchronous communications satellites (Arthur C. Clarke) and many aspects of computer technology (Mack Reynolds).",
"title": "Reasons for interplanetary travel"
},
{
"paragraph_id": 12,
"text": "Many science fiction stories feature detailed descriptions of how people could extract minerals from asteroids and energy from sources including orbital solar panels (unhampered by clouds) and the very strong magnetic field of Jupiter. Some point out that such techniques may be the only way to provide rising standards of living without being stopped by pollution or by depletion of Earth's resources (for example peak oil).",
"title": "Reasons for interplanetary travel"
},
{
"paragraph_id": 13,
"text": "Finally, colonizing other parts of the Solar System would prevent the whole human species from being exterminated by any one of a number of possible events (see Human extinction). One of these possible events is an asteroid impact like the one which may have resulted in the Cretaceous–Paleogene extinction event. Although various Spaceguard projects monitor the Solar System for objects that might come dangerously close to Earth, current asteroid deflection strategies are crude and untested. To make the task more difficult, carbonaceous chondrites are rather sooty and therefore very hard to detect. Although carbonaceous chondrites are thought to be rare, some are very large and the suspected \"dinosaur-killer\" may have been a carbonaceous chondrite.",
"title": "Reasons for interplanetary travel"
},
{
"paragraph_id": 14,
"text": "Some scientists, including members of the Space Studies Institute, argue that the vast majority of mankind eventually will live in space and will benefit from doing so.",
"title": "Reasons for interplanetary travel"
},
{
"paragraph_id": 15,
"text": "One of the main challenges in interplanetary travel is producing the very large velocity changes necessary to travel from one body to another in the Solar System.",
"title": "Economical travel techniques"
},
{
"paragraph_id": 16,
"text": "Due to the Sun's gravitational pull, a spacecraft moving farther from the Sun will slow down, while a spacecraft moving closer will speed up. Also, since any two planets are at different distances from the Sun, the planet from which the spacecraft starts is moving around the Sun at a different speed than the planet to which the spacecraft is travelling (in accordance with Kepler's Third Law). Because of these facts, a spacecraft desiring to transfer to a planet closer to the Sun must decrease its speed with respect to the Sun by a large amount in order to intercept it, while a spacecraft traveling to a planet farther out from the Sun must increase its speed substantially. Then, if additionally the spacecraft wishes to enter into orbit around the destination planet (instead of just flying by it), it must match the planet's orbital speed around the Sun, usually requiring another large velocity change.",
"title": "Economical travel techniques"
},
{
"paragraph_id": 17,
"text": "Simply doing this by brute force – accelerating in the shortest route to the destination and then matching the planet's speed – would require an extremely large amount of fuel. And the fuel required for producing these velocity changes has to be launched along with the payload, and therefore even more fuel is needed to put both the spacecraft and the fuel required for its interplanetary journey into orbit. Thus, several techniques have been devised to reduce the fuel requirements of interplanetary travel.",
"title": "Economical travel techniques"
},
{
"paragraph_id": 18,
"text": "As an example of the velocity changes involved, a spacecraft travelling from low Earth orbit to Mars using a simple trajectory must first undergo a change in speed (also known as a delta-v), in this case an increase, of about 3.8 km/s. Then, after intercepting Mars, it must change its speed by another 2.3 km/s in order to match Mars' orbital speed around the Sun and enter an orbit around it. For comparison, launching a spacecraft into low Earth orbit requires a change in speed of about 9.5 km/s.",
"title": "Economical travel techniques"
},
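The velocity changes quoted above can be approximated from the vis-viva equation. The following Python sketch is illustrative only: the gravitational parameters and the 300 km parking-orbit altitudes at Earth and Mars are assumptions added here, not values from the text.

```python
import math

MU_SUN   = 1.327e11   # km^3/s^2, gravitational parameter of the Sun
MU_EARTH = 3.986e5    # km^3/s^2
MU_MARS  = 4.283e4    # km^3/s^2
R_EARTH_ORBIT = 1.496e8   # km, mean Sun-Earth distance
R_MARS_ORBIT  = 2.279e8   # km, mean Sun-Mars distance

def vis_viva(mu, r, a):
    """Orbital speed at radius r on an orbit with semi-major axis a."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

# Heliocentric Hohmann transfer ellipse between the two planetary orbits
a_transfer = (R_EARTH_ORBIT + R_MARS_ORBIT) / 2.0
v_dep = vis_viva(MU_SUN, R_EARTH_ORBIT, a_transfer)      # speed needed when leaving Earth's distance
v_arr = vis_viva(MU_SUN, R_MARS_ORBIT, a_transfer)       # speed on arrival at Mars' distance
v_inf_earth = v_dep - math.sqrt(MU_SUN / R_EARTH_ORBIT)  # hyperbolic excess speed leaving Earth
v_inf_mars  = math.sqrt(MU_SUN / R_MARS_ORBIT) - v_arr   # hyperbolic excess speed approaching Mars

# Departure burn from a 300 km low Earth orbit (escape plus excess, minus current circular speed)
r_leo = 6371.0 + 300.0
dv_depart = math.sqrt(2 * MU_EARTH / r_leo + v_inf_earth**2) - math.sqrt(MU_EARTH / r_leo)

# Capture burn into a 300 km low Mars orbit
r_lmo = 3389.5 + 300.0
dv_capture = math.sqrt(2 * MU_MARS / r_lmo + v_inf_mars**2) - math.sqrt(MU_MARS / r_lmo)

print(f"departure burn from LEO : {dv_depart:.2f} km/s")   # ~3.6 km/s
print(f"capture burn at Mars    : {dv_capture:.2f} km/s")  # ~2.1 km/s
```

These idealized two-body figures (roughly 3.6 and 2.1 km/s) come out slightly below the ~3.8 and ~2.3 km/s quoted above, which allow for realistic trajectories and margins.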
{
"paragraph_id": 19,
"text": "For many years economical interplanetary travel meant using the Hohmann transfer orbit. Hohmann demonstrated that the lowest energy route between any two orbits is an elliptical \"orbit\" which forms a tangent to the starting and destination orbits. Once the spacecraft arrives, a second application of thrust will re-circularize the orbit at the new location. In the case of planetary transfers this means directing the spacecraft, originally in an orbit almost identical to Earth's, so that the aphelion of the transfer orbit is on the far side of the Sun near the orbit of the other planet. A spacecraft traveling from Earth to Mars via this method will arrive near Mars orbit in approximately 8.5 months, but because the orbital velocity is greater when closer to the center of mass (i.e. the Sun) and slower when farther from the center, the spacecraft will be traveling quite slowly and a small application of thrust is all that is needed to put it into a circular orbit around Mars. If the manoeuver is timed properly, Mars will be \"arriving\" under the spacecraft when this happens.",
"title": "Economical travel techniques"
},
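The 8.5-month figure follows from Kepler's third law applied to the transfer ellipse: the trip lasts half of that ellipse's period. A minimal sketch, where the mean orbital radii used are illustrative assumptions:

```python
import math

MU_SUN = 1.327e11                 # km^3/s^2, gravitational parameter of the Sun
AU = 1.496e8                      # km
a_transfer = (1.0 * AU + 1.524 * AU) / 2.0   # semi-major axis of the Earth-Mars transfer ellipse

period = 2 * math.pi * math.sqrt(a_transfer**3 / MU_SUN)   # full period of the ellipse, seconds
transfer_days = 0.5 * period / 86400.0                      # the trip is half an orbit

print(f"one-way transfer time: {transfer_days:.0f} days (~{transfer_days/30.4:.1f} months)")
# ~259 days, i.e. roughly 8.5 months
```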
{
"paragraph_id": 20,
"text": "The Hohmann transfer applies to any two orbits, not just those with planets involved. For instance it is the most common way to transfer satellites into geostationary orbit, after first being \"parked\" in low Earth orbit. However, the Hohmann transfer takes an amount of time similar to ½ of the orbital period of the outer orbit, so in the case of the outer planets this is many years – too long to wait. It is also based on the assumption that the points at both ends are massless, as in the case when transferring between two orbits around Earth for instance. With a planet at the destination end of the transfer, calculations become considerably more difficult.",
"title": "Economical travel techniques"
},
{
"paragraph_id": 21,
"text": "The gravitational slingshot technique uses the gravity of planets and moons to change the speed and direction of a spacecraft without using fuel. In typical example, a spacecraft is sent to a distant planet on a path that is much faster than what the Hohmann transfer would call for. This would typically mean that it would arrive at the planet's orbit and continue past it. However, if there is a planet between the departure point and the target, it can be used to bend the path toward the target, and in many cases the overall travel time is greatly reduced. A prime example of this are the two crafts of the Voyager program, which used slingshot effects to change trajectories several times in the outer Solar System. It is difficult to use this method for journeys in the inner part of the Solar System, although it is possible to use other nearby planets such as Venus or even the Moon as slingshots in journeys to the outer planets.",
"title": "Economical travel techniques"
},
{
"paragraph_id": 22,
"text": "This maneuver can only change an object's velocity relative to a third, uninvolved object, – possibly the “centre of mass” or the Sun. There is no change in the velocities of the two objects involved in the maneuver relative to each other. The Sun cannot be used in a gravitational slingshot because it is stationary compared to rest of the Solar System, which orbits the Sun. It may be used to send a spaceship or probe into the galaxy because the Sun revolves around the center of the Milky Way.",
"title": "Economical travel techniques"
},
{
"paragraph_id": 23,
"text": "A powered slingshot is the use of a rocket engine at or around closest approach to a body (periapsis). The use at this point multiplies up the effect of the delta-v, and gives a bigger effect than at other times.",
"title": "Economical travel techniques"
},
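A short numerical sketch of why a burn at periapsis is so effective. The figures below – a periapsis pass at the local escape speed of 10 km/s and a 1 km/s burn – are illustrative assumptions, not values from the text.

```python
import math

v_periapsis = 10.0   # km/s, local escape speed at closest approach (illustrative assumption)
dv_burn     = 1.0    # km/s, delta-v applied by the engine at periapsis

# A craft arriving on a parabolic path (zero excess speed) passes periapsis at exactly
# escape speed.  After the burn, the speed it keeps "at infinity" is:
v_infinity = math.sqrt((v_periapsis + dv_burn) ** 2 - v_periapsis ** 2)

print(f"1 km/s spent at periapsis buys {v_infinity:.2f} km/s of hyperbolic excess speed")
# ~4.58 km/s -- far more than the 1 km/s the same burn would yield in deep space
```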
{
"paragraph_id": 24,
"text": "Computers did not exist when Hohmann transfer orbits were first proposed (1925) and were slow, expensive and unreliable when gravitational slingshots were developed (1959). Recent advances in computing have made it possible to exploit many more features of the gravity fields of astronomical bodies and thus calculate even lower-cost trajectories. Paths have been calculated which link the Lagrange points of the various planets into the so-called Interplanetary Transport Network. Such \"fuzzy orbits\" use significantly less energy than Hohmann transfers but are much, much slower. They aren't practical for human crewed missions because they generally take years or decades, but may be useful for high-volume transport of low-value commodities if humanity develops a space-based economy.",
"title": "Economical travel techniques"
},
{
"paragraph_id": 25,
"text": "Aerobraking uses the atmosphere of the target planet to slow down. It was first used on the Apollo program where the returning spacecraft did not enter Earth orbit but instead used a S-shaped vertical descent profile (starting with an initially steep descent, followed by a leveling out, followed by a slight climb, followed by a return to a positive rate of descent continuing to splash-down in the ocean) through Earth's atmosphere to reduce its speed until the parachute system could be deployed enabling a safe landing. Aerobraking does not require a thick atmosphere – for example most Mars landers use the technique, and Mars' atmosphere is only about 1% as thick as Earth's.",
"title": "Economical travel techniques"
},
{
"paragraph_id": 26,
"text": "Aerobraking converts the spacecraft's kinetic energy into heat, so it requires a heatshield to prevent the craft from burning up. As a result, aerobraking is only helpful in cases where the fuel needed to transport the heatshield to the planet is less than the fuel that would be required to brake an unshielded craft by firing its engines. This can be addressed by creating heatshields from material available near the target.",
"title": "Economical travel techniques"
},
{
"paragraph_id": 27,
"text": "Several technologies have been proposed which both save fuel and provide significantly faster travel than the traditional methodology of using Hohmann transfers. Some are still just theoretical, but over time, several of the theoretical approaches have been tested on spaceflight missions. For example, the Deep Space 1 mission was a successful test of an ion drive. These improved technologies typically focus on one or more of:",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 28,
"text": "Besides making travel faster or cost less, such improvements could also allow greater design \"safety margins\" by reducing the imperative to make spacecraft lighter.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 29,
"text": "All rocket concepts are limited by the Tsiolkovsky rocket equation, which sets the characteristic velocity available as a function of exhaust velocity and mass ratio, of initial (M0, including fuel) to final (M1, fuel depleted) mass. The main consequence is that mission velocities of more than a few times the velocity of the rocket motor exhaust (with respect to the vehicle) rapidly become impractical, as the dry mass (mass of payload and rocket without fuel) falls to below 10% of the entire rocket's wet mass (mass of rocket with fuel).",
"title": "Improved technologies and methodologies"
},
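A minimal sketch of the rocket equation, delta-v = v_e ln(M0/M1), illustrating the point about dry-mass fractions. The exhaust velocity chosen is an illustrative assumption for a high-performance chemical engine.

```python
import math

v_exhaust = 4.4   # km/s, roughly a high-performance chemical engine (assumption)

for multiple in (1, 2, 3, 4):
    delta_v = multiple * v_exhaust
    mass_ratio = math.exp(delta_v / v_exhaust)   # M0 / M1 from the Tsiolkovsky equation
    dry_fraction = 100.0 / mass_ratio            # final mass as a percentage of initial mass
    print(f"dv = {delta_v:4.1f} km/s -> mass ratio {mass_ratio:5.1f}, "
          f"dry mass {dry_fraction:4.1f}% of the wet mass")
# At 3x the exhaust velocity the dry mass is already only ~5% of the wet mass.
```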
{
"paragraph_id": 30,
"text": "In a nuclear thermal rocket or solar thermal rocket a working fluid, usually hydrogen, is heated to a high temperature, and then expands through a rocket nozzle to create thrust. The energy replaces the chemical energy of the reactive chemicals in a traditional rocket engine. Due to the low molecular mass and hence high thermal velocity of hydrogen these engines are at least twice as fuel efficient as chemical engines, even after including the weight of the reactor.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 31,
"text": "The US Atomic Energy Commission and NASA tested a few designs from 1959 to 1968. The NASA designs were conceived as replacements for the upper stages of the Saturn V launch vehicle, but the tests revealed reliability problems, mainly caused by the vibration and heating involved in running the engines at such high thrust levels. Political and environmental considerations make it unlikely such an engine will be used in the foreseeable future, since nuclear thermal rockets would be most useful at or near the Earth's surface and the consequences of a malfunction could be disastrous. Fission-based thermal rocket concepts produce lower exhaust velocities than the electric and plasma concepts described below, and are therefore less attractive solutions. For applications requiring high thrust-to-weight ratio, such as planetary escape, nuclear thermal is potentially more attractive.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 32,
"text": "Electric propulsion systems use an external source such as a nuclear reactor or solar cells to generate electricity, which is then used to accelerate a chemically inert propellant to speeds far higher than achieved in a chemical rocket. Such drives produce feeble thrust, and are therefore unsuitable for quick maneuvers or for launching from the surface of a planet. But they are so economical in their use of reaction mass that they can keep firing continuously for days or weeks, while chemical rockets use up reaction mass so quickly that they can only fire for seconds or minutes. Even a trip to the Moon is long enough for an electric propulsion system to outrun a chemical rocket – the Apollo missions took 3 days in each direction.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 33,
"text": "NASA's Deep Space One was a very successful test of a prototype ion drive, which fired for a total of 678 days and enabled the probe to run down Comet Borrelly, a feat which would have been impossible for a chemical rocket. Dawn, the first NASA operational (i.e., non-technology demonstration) mission to use an ion drive for its primary propulsion, successfully orbited the large main-belt asteroids 1 Ceres and 4 Vesta. A more ambitious, nuclear-powered version was intended for a Jupiter mission without human crew, the Jupiter Icy Moons Orbiter (JIMO), originally planned for launch sometime in the next decade. Due to a shift in priorities at NASA that favored human crewed space missions, the project lost funding in 2005. A similar mission is currently under discussion as the US component of a joint NASA/ESA program for the exploration of Europa and Ganymede.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 34,
"text": "A NASA multi-center Technology Applications Assessment Team led from the Johnson Spaceflight Center, has as of January 2011 described \"Nautilus-X\", a concept study for a multi-mission space exploration vehicle useful for missions beyond low Earth orbit (LEO), of up to 24 months duration for a crew of up to six. Although Nautilus-X is adaptable to a variety of mission-specific propulsion units of various low-thrust, high specific impulse (Isp) designs, nuclear ion-electric drive is shown for illustrative purposes. It is intended for integration and checkout at the International Space Station (ISS), and would be suitable for deep-space missions from the ISS to and beyond the Moon, including Earth/Moon L1, Sun/Earth L2, near-Earth asteroidal, and Mars orbital destinations. It incorporates a reduced-g centrifuge providing artificial gravity for crew health to ameliorate the effects of long-term 0g exposure, and the capability to mitigate the space radiation environment.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 35,
"text": "The electric propulsion missions already flown, or currently scheduled, have used solar electric power, limiting their capability to operate far from the Sun, and also limiting their peak acceleration due to the mass of the electric power source. Nuclear-electric or plasma engines, operating for long periods at low thrust and powered by fission reactors, can reach speeds much greater than chemically powered vehicles.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 36,
"text": "Fusion rockets, powered by nuclear fusion reactions, would \"burn\" such light element fuels as deuterium, tritium, or He. Because fusion yields about 1% of the mass of the nuclear fuel as released energy, it is energetically more favorable than fission, which releases only about 0.1% of the fuel's mass-energy. However, either fission or fusion technologies can in principle achieve velocities far higher than needed for Solar System exploration, and fusion energy still awaits practical demonstration on Earth.",
"title": "Improved technologies and methodologies"
},
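To put the quoted mass-energy fractions in perspective, a quick E = mc² comparison; the chemical figure of roughly 13 MJ/kg for hydrogen/oxygen combustion is an added approximation, not a value from the text.

```python
C = 2.998e8                                # m/s, speed of light

# Mass-energy fractions quoted above: ~1% for fusion fuel, ~0.1% for fission fuel.
fusion_energy_per_kg   = 0.01  * C**2      # J released per kg of fusion fuel
fission_energy_per_kg  = 0.001 * C**2      # J released per kg of fission fuel
chemical_energy_per_kg = 1.3e7             # J/kg, roughly hydrogen/oxygen combustion (assumption)

print(f"fusion  : {fusion_energy_per_kg:.1e} J/kg")    # ~9e14 J/kg
print(f"fission : {fission_energy_per_kg:.1e} J/kg")   # ~9e13 J/kg
print(f"chemical: {chemical_energy_per_kg:.1e} J/kg")  # ~1e7  J/kg, tens of millions of times less
```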
{
"paragraph_id": 37,
"text": "One proposal using a fusion rocket was Project Daedalus. Another fairly detailed vehicle system, designed and optimized for crewed Solar System exploration, \"Discovery II\", based on the DHe reaction but using hydrogen as reaction mass, has been described by a team from NASA's Glenn Research Center. It achieves characteristic velocities of >300 km/s with an acceleration of ~1.7•10 g, with a ship initial mass of ~1700 metric tons, and payload fraction above 10%.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 38,
"text": "Fusion rockets are considered to be a likely source of interplanetary transport for a planetary civilization.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 39,
"text": "See the spacecraft propulsion article for a discussion of a number of other technologies that could, in the medium to longer term, be the basis of interplanetary missions. Unlike the situation with interstellar travel, the barriers to fast interplanetary travel involve engineering and economics rather than any basic physics.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 40,
"text": "Solar sails rely on the fact that light reflected from a surface exerts pressure on the surface. The radiation pressure is small and decreases by the square of the distance from the Sun, but unlike rockets, solar sails require no fuel. Although the thrust is small, it continues as long as the Sun shines and the sail is deployed.",
"title": "Improved technologies and methodologies"
},
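The smallness of the thrust can be made concrete with the radiation-pressure formula for an ideal, perfectly reflecting sail at 1 AU. The sail area and craft mass below are illustrative assumptions only.

```python
SOLAR_CONSTANT = 1361.0     # W/m^2, mean solar irradiance at 1 AU
C = 2.998e8                 # m/s, speed of light

sail_area  = 1000.0         # m^2  (assumption)
craft_mass = 100.0          # kg   (assumption)

pressure = 2 * SOLAR_CONSTANT / C          # Pa; a perfect reflector doubles the photon momentum transfer
thrust   = pressure * sail_area            # N
accel    = thrust / craft_mass             # m/s^2

print(f"radiation pressure : {pressure*1e6:.2f} micropascal")                      # ~9.1 uPa
print(f"thrust on the sail : {thrust*1000:.1f} mN")                                # ~9.1 mN
print(f"acceleration       : {accel:.2e} m/s^2 (~{accel/9.81*1e6:.0f} micro-g)")
print(f"delta-v per 30 days: {accel*30*86400:.0f} m/s")                            # small but continuous
```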
{
"paragraph_id": 41,
"text": "The original concept relied only on radiation from the Sun – for example in Arthur C. Clarke's 1965 story \"Sunjammer\". More recent light sail designs propose to boost the thrust by aiming ground-based lasers or masers at the sail. Ground-based lasers or masers can also help a light-sail spacecraft to decelerate: the sail splits into an outer and inner section, the outer section is pushed forward and its shape is changed mechanically to focus reflected radiation on the inner portion, and the radiation focused on the inner section acts as a brake.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 42,
"text": "Although most articles about light sails focus on interstellar travel, there have been several proposals for their use within the Solar System.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 43,
"text": "Currently, the only spacecraft to use a solar sail as the main method of propulsion is IKAROS which was launched by JAXA on May 21, 2010. It has since been successfully deployed, and shown to be producing acceleration as expected. Many ordinary spacecraft and satellites also use solar collectors, temperature-control panels and Sun shades as light sails, to make minor corrections to their attitude and orbit without using fuel. A few have even had small purpose-built solar sails for this use (for example Eurostar E3000 geostationary communications satellites built by EADS Astrium).",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 44,
"text": "It is possible to put stations or spacecraft on orbits that cycle between different planets, for example a Mars cycler would synchronously cycle between Mars and Earth, with very little propellant usage to maintain the trajectory. Cyclers are conceptually a good idea, because massive radiation shields, life support and other equipment only need to be put onto the cycler trajectory once. A cycler could combine several roles: habitat (for example it could spin to produce an \"artificial gravity\" effect); mothership (providing life support for the crews of smaller spacecraft which hitch a ride on it). Cyclers could also possibly make excellent cargo ships for resupply of a colony.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 45,
"text": "A space elevator is a theoretical structure that would transport material from a planet's surface into orbit. The idea is that, once the expensive job of building the elevator is complete, an indefinite number of loads can be transported into orbit at minimal cost. Even the simplest designs avoid the vicious circle of rocket launches from the surface, wherein the fuel needed to travel the last 10% of the distance into orbit must be lifted all the way from the surface, requiring even more fuel, and so on. More sophisticated space elevator designs reduce the energy cost per trip by using counterweights, and the most ambitious schemes aim to balance loads going up and down and thus make the energy cost close to zero. Space elevators have also sometimes been referred to as \"beanstalks\", \"space bridges\", \"space lifts\", \"space ladders\" and \"orbital towers\".",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 46,
"text": "A terrestrial space elevator is beyond our current technology, although a lunar space elevator could theoretically be built using existing materials.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 47,
"text": "A skyhook is a theoretical class of orbiting tether propulsion intended to lift payloads to high altitudes and speeds. Proposals for skyhooks include designs that employ tethers spinning at hypersonic speed for catching high speed payloads or high altitude aircraft and placing them in orbit. In addition, it has been suggested that the rotating skyhook is \"not engineeringly feasible using presently available materials\".",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 48,
"text": "The SpaceX Starship is designed to be fully and rapidly reusable, making use of the SpaceX reusable technology that was developed during 2011–2018 for Falcon 9 and Falcon Heavy launch vehicles.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 49,
"text": "SpaceX CEO Elon Musk estimates that the reusability capability alone, on both the launch vehicle and the spacecraft associated with the Starship will reduce overall system costs per tonne delivered to Mars by at least two orders of magnitude over what NASA had previously achieved.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 50,
"text": "When launching interplanetary probes from the surface of Earth, carrying all energy needed for the long-duration mission, payload quantities are necessarily extremely limited, due to the basis mass limitations described theoretically by the rocket equation. One alternative to transport more mass on interplanetary trajectories is to use up nearly all of the upper stage propellant on launch, and then refill propellants in Earth orbit before firing the rocket to escape velocity for a heliocentric trajectory. These propellants could be stored on orbit at a propellant depot, or carried to orbit in a propellant tanker to be directly transferred to the interplanetary spacecraft. For returning mass to Earth, a related option is to mine raw materials from a solar system celestial object, refine, process, and store the reaction products (propellant) on the Solar System body until such time as a vehicle needs to be loaded for launch.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 51,
"text": "As of 2019, SpaceX is developing a system in which a reusable first stage vehicle would transport a crewed interplanetary spacecraft to Earth orbit, detach, return to its launch pad where a tanker spacecraft would be mounted atop it, then both fueled, then launched again to rendezvous with the waiting crewed spacecraft. The tanker would then transfer its fuel to the human crewed spacecraft for use on its interplanetary voyage. The SpaceX Starship is a stainless steel-structure spacecraft propelled by six Raptor engines operating on densified methane/oxygen propellants. It is 55 m (180 ft)-long, 9 m (30 ft)-diameter at its widest point, and is capable of transporting up to 100 tonnes (220,000 lb) of cargo and passengers per trip to Mars, with on-orbit propellant refill before the interplanetary part of the journey.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 52,
"text": "As an example of a funded project currently under development, a key part of the system SpaceX has designed for Mars in order to radically decrease the cost of spaceflight to interplanetary destinations is the placement and operation of a physical plant on Mars to handle production and storage of the propellant components necessary to launch and fly the Starships back to Earth, or perhaps to increase the mass that can be transported onward to destinations in the outer Solar System.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 53,
"text": "The first Starship to Mars will carry a small propellant plant as a part of its cargo load. The plant will be expanded over multiple synods as more equipment arrives, is installed, and placed into mostly-autonomous production.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 54,
"text": "The SpaceX propellant plant will take advantage of the large supplies of carbon dioxide and water resources on Mars, mining the water (H2O) from subsurface ice and collecting CO2 from the atmosphere. A chemical plant will process the raw materials by means of electrolysis and the Sabatier process to produce oxygen (O2) and methane (CH4), and then liquefy it to facilitate long-term storage and ultimate use.",
"title": "Improved technologies and methodologies"
},
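A rough stoichiometric sketch of the in-situ propellant chemistry described above (electrolysis 2 H2O → 2 H2 + O2, Sabatier CO2 + 4 H2 → CH4 + 2 H2O). The assumption that the water produced by the Sabatier step is recycled back into the electrolyser, and the per-kilogram framing, are illustrative, not published SpaceX figures.

```python
# Molar masses in g/mol
M_H2O, M_CO2, M_CH4, M_O2 = 18.0, 44.0, 16.0, 32.0

# Per mole of CH4 produced:
#   Sabatier:     CO2 + 4 H2 -> CH4 + 2 H2O
#   Electrolysis: 2 H2O      -> 2 H2 + O2   (run twice to supply 4 H2, yielding 2 O2)
# Recycling the 2 H2O from the Sabatier step leaves a net demand of 2 H2O and 1 CO2,
# and a net output of 1 CH4 and 2 O2.
water_per_kg_ch4 = 2 * M_H2O / M_CH4      # kg of mined water ice per kg of methane
co2_per_kg_ch4   = M_CO2 / M_CH4          # kg of atmospheric CO2 per kg of methane
o2_per_kg_ch4    = 2 * M_O2 / M_CH4       # kg of oxygen co-produced per kg of methane

print(f"per kg CH4: {water_per_kg_ch4:.2f} kg H2O and {co2_per_kg_ch4:.2f} kg CO2 in, "
      f"{o2_per_kg_ch4:.2f} kg O2 out")
# 2.25 kg water + 2.75 kg CO2 -> 1 kg CH4 + 4 kg O2 (mass balances; stoichiometric O/F ratio of 4)
```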
{
"paragraph_id": 55,
"text": "Current space vehicles attempt to launch with all their fuel (propellants and energy supplies) on board that they will need for their entire journey, and current space structures are lifted from the Earth's surface. Non-terrestrial sources of energy and materials are mostly a lot further away, but most would not require lifting out of a strong gravity field and therefore should be much cheaper to use in space in the long term.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 56,
"text": "The most important non-terrestrial resource is energy, because it can be used to transform non-terrestrial materials into useful forms (some of which may also produce energy). At least two fundamental non-terrestrial energy sources have been proposed: solar-powered energy generation (unhampered by clouds), either directly by solar cells or indirectly by focusing solar radiation on boilers which produce steam to drive generators; and electrodynamic tethers which generate electricity from the powerful magnetic fields of some planets (Jupiter has a very powerful magnetic field).",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 57,
"text": "Water ice would be very useful and is widespread on the moons of Jupiter and Saturn:",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 58,
"text": "Oxygen is a common constituent of the Moon's crust, and is probably abundant in most other bodies in the Solar System. Non-terrestrial oxygen would be valuable as a source of water ice only if an adequate source of hydrogen can be found. Possible uses include:",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 59,
"text": "Unfortunately hydrogen, along with other volatiles like carbon and nitrogen, are much less abundant than oxygen in the inner Solar System.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 60,
"text": "Scientists expect to find a vast range of organic compounds in some of the planets, moons and comets of the outer Solar System, and the range of possible uses is even wider. For example, methane can be used as a fuel (burned with non-terrestrial oxygen), or as a feedstock for petrochemical processes such as making plastics. And ammonia could be a valuable feedstock for producing fertilizers to be used in the vegetable gardens of orbital and planetary bases, reducing the need to lift food to them from Earth.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 61,
"text": "Even unprocessed rock may be useful as rocket propellant if mass drivers are employed.",
"title": "Improved technologies and methodologies"
},
{
"paragraph_id": 62,
"text": "Life support systems must be capable of supporting human life for weeks, months or even years. A breathable atmosphere of at least 35 kPa (5.1 psi) must be maintained, with adequate amounts of oxygen, nitrogen, and controlled levels of carbon dioxide, trace gases and water vapor.",
"title": "Design requirements for crewed interplanetary travel"
},
{
"paragraph_id": 63,
"text": "In October 2015, the NASA Office of Inspector General issued a health hazards report related to human spaceflight, including a human mission to Mars.",
"title": "Design requirements for crewed interplanetary travel"
},
{
"paragraph_id": 64,
"text": "Once a vehicle leaves low Earth orbit and the protection of Earth's magnetosphere, it enters the Van Allen radiation belt, a region of high radiation. Beyond the Van Allen belts, radiation levels generally decrease, but can fluctuate over time. These high energy cosmic rays pose a health threat. Even the minimum levels of radiation during these fluctuations is comparable to the current annual limit for astronauts in low-Earth orbit.",
"title": "Design requirements for crewed interplanetary travel"
},
{
"paragraph_id": 65,
"text": "Scientists of Russian Academy of Sciences are searching for methods of reducing the risk of radiation-induced cancer in preparation for the mission to Mars. They consider as one of the options a life support system generating drinking water with low content of deuterium (a stable isotope of hydrogen) to be consumed by the crew members. Preliminary investigations have shown that deuterium-depleted water features certain anti-cancer effects. Hence, deuterium-free drinking water is considered to have the potential of lowering the risk of cancer caused by extreme radiation exposure of the Martian crew.",
"title": "Design requirements for crewed interplanetary travel"
},
{
"paragraph_id": 66,
"text": "In addition, coronal mass ejections from the Sun are highly dangerous, and are fatal within a very short timescale to humans unless they are protected by massive shielding.",
"title": "Design requirements for crewed interplanetary travel"
},
{
"paragraph_id": 67,
"text": "Any major failure to a spacecraft en route is likely to be fatal, and even a minor one could have dangerous results if not repaired quickly, something difficult to accomplish in open space. The crew of the Apollo 13 mission survived despite an explosion caused by a faulty oxygen tank (1970).",
"title": "Design requirements for crewed interplanetary travel"
},
{
"paragraph_id": 68,
"text": "For astrodynamics reasons, economic spacecraft travel to other planets is only practical within certain time windows. Outside these windows the planets are essentially inaccessible from Earth with current technology. This constrains flights and limits rescue options in the case of an emergency.",
"title": "Design requirements for crewed interplanetary travel"
}
]
| Interplanetary spaceflight or interplanetary travel is the crewed or uncrewed travel between stars and planets, usually within a single planetary system. In practice, spaceflights of this type are confined to travel between the planets of the Solar System. Uncrewed space probes have flown to all the observed planets in the Solar System as well as to dwarf planets Pluto and Ceres, and several asteroids. Orbiters and landers return more information than fly-by missions. Crewed flights have landed on the Moon and have been planned, from time to time, for Mars, Venus and Mercury. While many scientists appreciate the knowledge value that uncrewed flights provide, the value of crewed missions is more controversial. Science fiction writers propose a number of benefits, including the mining of asteroids, access to solar power, and room for colonization in the event of an Earth catastrophe. A number of techniques have been developed to make interplanetary flights more economical. Advances in computing and theoretical science have already improved some techniques, while new proposals may lead to improvements in speed, fuel economy, and safety. Travel techniques must take into consideration the velocity changes necessary to travel from one body to another in the Solar System. For orbital flights, an additional adjustment must be made to match the orbital speed of the destination body. Other developments are designed to improve rocket launching and propulsion, as well as the use of non-traditional sources of energy. Using extraterrestrial resources for energy, oxygen, and water would reduce costs and improve life support systems. Any crewed interplanetary flight must include certain design requirements. Life support systems must be capable of supporting human lives for extended periods of time. Preventative measures are needed to reduce exposure to radiation and ensure optimum reliability. | 2001-10-03T19:26:40Z | 2023-10-24T16:05:18Z | [
"Template:Cite conference",
"Template:Spaceflight sidebar",
"Template:Citation needed",
"Template:Convert",
"Template:Clarify",
"Template:Div col",
"Template:Div col end",
"Template:Cite magazine",
"Template:Spaceflight",
"Template:Main",
"Template:'",
"Template:Cvt",
"Template:When",
"Template:Cite web",
"Template:Cite journal",
"Template:Cite book",
"Template:Webarchive",
"Template:Cbignore",
"Template:Authority control",
"Template:Short description",
"Template:Annotated link",
"Template:Reflist",
"Template:Dead link",
"Template:Cite news",
"Template:Cite AV media",
"Template:Space colonization"
]
| https://en.wikipedia.org/wiki/Interplanetary_spaceflight |
15,112 | Wave interference | In physics, interference is a phenomenon in which two coherent waves are combined by adding their intensities or displacements with due consideration for their phase difference. The resultant wave may have greater intensity (constructive interference) or lower amplitude (destructive interference) if the two waves are in phase or out of phase, respectively. Interference effects can be observed with all types of waves, for example, light, radio, acoustic, surface water waves, gravity waves, or matter waves as well as in loudspeakers as electrical waves.
The word interference is derived from the Latin words inter which means "between" and fere which means "hit or strike", and was coined by Thomas Young in 1801.
The principle of superposition of waves states that when two or more propagating waves of the same type are incident on the same point, the resultant amplitude at that point is equal to the vector sum of the amplitudes of the individual waves. If a crest of a wave meets a crest of another wave of the same frequency at the same point, then the amplitude is the sum of the individual amplitudes—this is constructive interference. If a crest of one wave meets a trough of another wave, then the amplitude is equal to the difference in the individual amplitudes—this is known as destructive interference. In ideal media (water and air are almost ideal), energy is always conserved; at points of destructive interference, the energy is stored in the elasticity of the medium. For example, when we drop two pebbles in a pond we see a pattern, but eventually the waves continue, and only when they reach the shore is the energy absorbed away from the medium.
Constructive interference occurs when the phase difference between the waves is an even multiple of π (180°), whereas destructive interference occurs when the difference is an odd multiple of π. If the difference between the phases is intermediate between these two extremes, then the magnitude of the displacement of the summed waves lies between the minimum and maximum values.
Consider, for example, what happens when two identical stones are dropped into a still pool of water at different locations. Each stone generates a circular wave propagating outwards from the point where the stone was dropped. When the two waves overlap, the net displacement at a particular point is the sum of the displacements of the individual waves. At some points, these will be in phase, and will produce a maximum displacement. In other places, the waves will be in anti-phase, and there will be no net displacement at these points. Thus, parts of the surface will be stationary—these are seen in the figure above and to the right as stationary blue-green lines radiating from the centre.
Interference of light is a unique phenomenon in that we can never observe superposition of the EM field directly as we can, for example, in water. Superposition in the EM field is an assumed and necessary requirement; fundamentally, two light beams pass through each other and continue on their respective paths. Light can be explained classically by the superposition of waves; however, a deeper understanding of light interference requires knowledge of the wave–particle duality of light, which is due to quantum mechanics. Prime examples of light interference are the famous double-slit experiment, laser speckle, anti-reflective coatings and interferometers. Traditionally the classical wave model, based on the Huygens–Fresnel principle, is taught as the basis for understanding optical interference; however, an explanation based on the Feynman path integral also exists, which takes quantum mechanical considerations into account.
The above can be demonstrated in one dimension by deriving the formula for the sum of two waves. The equation for the amplitude of a sinusoidal wave traveling to the right along the x-axis is
where A {\displaystyle A} is the peak amplitude, k = 2 π / λ {\displaystyle k=2\pi /\lambda } is the wavenumber and ω = 2 π f {\displaystyle \omega =2\pi f} is the angular frequency of the wave. Suppose a second wave of the same frequency and amplitude but with a different phase is also traveling to the right
where φ {\displaystyle \varphi } is the phase difference between the waves in radians. The two waves will superpose and add: the sum of the two waves is
Using the trigonometric identity for the sum of two cosines: cos a + cos b = 2 cos ( a − b 2 ) cos ( a + b 2 ) , {\textstyle \cos a+\cos b=2\cos \left({a-b \over 2}\right)\cos \left({a+b \over 2}\right),} this can be written
This represents a wave at the original frequency, traveling to the right like its components, whose amplitude is proportional to the cosine of φ / 2 {\displaystyle \varphi /2} .
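The displayed equations of this derivation do not survive in the extracted text above. A standard reconstruction from the definitions given (peak amplitude A, wavenumber k = 2π/λ, angular frequency ω = 2πf, phase difference φ) is:

```latex
\[
W_1(x,t) = A\cos(kx - \omega t),
\qquad
W_2(x,t) = A\cos(kx - \omega t + \varphi)
\]
\[
W_1 + W_2
  = A\bigl[\cos(kx - \omega t) + \cos(kx - \omega t + \varphi)\bigr]
  = 2A\cos\!\left(\frac{\varphi}{2}\right)\cos\!\left(kx - \omega t + \frac{\varphi}{2}\right)
\]
```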
A simple form of interference pattern is obtained if two plane waves of the same frequency intersect at an angle. Interference is essentially an energy redistribution process. The energy which is lost at the destructive interference is regained at the constructive interference. One wave is travelling horizontally, and the other is travelling downwards at an angle θ to the first wave. Assuming that the two waves are in phase at the point B, then the relative phase changes along the x-axis. The phase difference at the point A is given by
It can be seen that the two waves are in phase when
and are half a cycle out of phase when
Constructive interference occurs when the waves are in phase, and destructive interference when they are half a cycle out of phase. Thus, an interference fringe pattern is produced, where the separation of the maxima is
and df is known as the fringe spacing. The fringe spacing increases with increase in wavelength, and with decreasing angle θ.
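The displayed relations for this two-plane-wave geometry are likewise missing above. Writing x for the distance along the x-axis from the point B where the two waves are in phase (a naming assumption made here for clarity), the standard reconstruction is:

```latex
\[
\Delta\varphi = \frac{2\pi x\sin\theta}{\lambda},
\qquad
\text{in phase when } x\sin\theta = m\lambda,
\qquad
\text{half a cycle out of phase when } x\sin\theta = \left(m + \tfrac{1}{2}\right)\lambda
\]
\[
d_f = \frac{\lambda}{\sin\theta}
\]
```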
The fringes are observed wherever the two waves overlap and the fringe spacing is uniform throughout.
A point source produces a spherical wave. If the light from two point sources overlaps, the interference pattern maps out the way in which the phase difference between the two waves varies in space. This depends on the wavelength and on the separation of the point sources. The figure to the right shows interference between two spherical waves. The wavelength increases from top to bottom, and the distance between the sources increases from left to right.
When the plane of observation is far enough away, the fringe pattern will be a series of almost straight lines, since the waves will then be almost planar.
Interference occurs when several waves are added together provided that the phase differences between them remain constant over the observation time.
It is sometimes desirable for several waves of the same frequency and amplitude to sum to zero (that is, interfere destructively, cancel). This is the principle behind, for example, 3-phase power and the diffraction grating. In both of these cases, the result is achieved by uniform spacing of the phases.
It is easy to see that a set of waves will cancel if they have the same amplitude and their phases are spaced equally in angle. Using phasors, each wave can be represented as A e i φ n {\displaystyle Ae^{i\varphi _{n}}} for N {\displaystyle N} waves from n = 0 {\displaystyle n=0} to n = N − 1 {\displaystyle n=N-1} , where
To show that
one merely assumes the converse, then multiplies both sides by e i 2 π N . {\displaystyle e^{i{\frac {2\pi }{N}}}.}
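The displayed expressions referred to here are the equally spaced phases and the cancellation they imply; a reconstruction consistent with the argument above is:

```latex
\[
\varphi_n = \frac{2\pi n}{N},
\qquad
\sum_{n=0}^{N-1} A e^{i\varphi_n} = 0 \quad (N > 1)
\]
% If S denotes the sum, multiplying by e^{i 2\pi/N} merely permutes its terms,
% so e^{i 2\pi/N} S = S; since e^{i 2\pi/N} \neq 1 for N > 1, it follows that S = 0.
```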
The Fabry–Pérot interferometer uses interference between multiple reflections.
A diffraction grating can be considered to be a multiple-beam interferometer; since the peaks which it produces are generated by interference between the light transmitted by each of the elements in the grating; see interference vs. diffraction for further discussion.
Mechanical and gravity waves can be directly observed: they are real-valued wave functions; optical and matter waves cannot be directly observed: they are complex valued wave functions. Some of the differences between real valued and complex valued wave interference include:
Because the frequency of light waves (~10¹⁴ Hz) is too high for currently available detectors to detect the variation of the electric field of the light, it is possible to observe only the intensity of an optical interference pattern. The intensity of the light at a given point is proportional to the time-averaged square of the amplitude of the wave. This can be expressed mathematically as follows. The displacement of the two waves at a point r is:
where A represents the magnitude of the displacement, φ represents the phase and ω represents the angular frequency.
The displacement of the summed waves is
The intensity of the light at r is given by
This can be expressed in terms of the intensities of the individual waves as
Thus, the interference pattern maps out the difference in phase between the two waves, with maxima occurring when the phase difference is a multiple of 2π. If the two beams are of equal intensity, the maxima are four times as bright as the individual beams, and the minima have zero intensity.
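The displacement and intensity formulas referred to in the preceding paragraphs are missing from the extracted text; in the usual complex notation they read:

```latex
\[
U_1(\mathbf r,t) = A_1(\mathbf r)\,e^{i[\varphi_1(\mathbf r)-\omega t]},
\qquad
U_2(\mathbf r,t) = A_2(\mathbf r)\,e^{i[\varphi_2(\mathbf r)-\omega t]},
\qquad
U = U_1 + U_2
\]
\[
I(\mathbf r) \propto |U|^2
  = A_1^2 + A_2^2 + 2A_1A_2\cos(\varphi_1-\varphi_2)
  = I_1 + I_2 + 2\sqrt{I_1 I_2}\,\cos(\varphi_1-\varphi_2)
\]
```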
Classically the two waves must have the same polarization to give rise to interference fringes since it is not possible for waves of different polarizations to cancel one another out or add together. Instead, when waves of different polarization are added together, they give rise to a wave of a different polarization state.
Quantum mechanically, the theories of Paul Dirac and Richard Feynman offer a more modern approach. Dirac showed that every quantum or photon of light acts on its own, which he famously stated as "every photon interferes with itself". Richard Feynman showed that, by evaluating a path integral in which all possible paths are considered, a number of higher-probability paths will emerge. In thin films, for example, a film thickness that is not a multiple of the light's wavelength will not allow the quanta to traverse; only reflection is possible.
The discussion above assumes that the waves which interfere with one another are monochromatic, i.e. have a single frequency—this requires that they are infinite in time. This is not, however, either practical or necessary. Two identical waves of finite duration whose frequency is fixed over that period will give rise to an interference pattern while they overlap. Two identical waves which consist of a narrow spectrum of frequency waves of finite duration (but shorter than their coherence time), will give a series of fringe patterns of slightly differing spacings, and provided the spread of spacings is significantly less than the average fringe spacing, a fringe pattern will again be observed during the time when the two waves overlap.
Conventional light sources emit waves of differing frequencies and at different times from different points in the source. If the light is split into two waves and then re-combined, each individual light wave may generate an interference pattern with its other half, but the individual fringe patterns generated will have different phases and spacings, and normally no overall fringe pattern will be observable. However, single-element light sources, such as sodium- or mercury-vapor lamps have emission lines with quite narrow frequency spectra. When these are spatially and colour filtered, and then split into two waves, they can be superimposed to generate interference fringes. All interferometry prior to the invention of the laser was done using such sources and had a wide range of successful applications.
A laser beam generally approximates much more closely to a monochromatic source, and thus it is much more straightforward to generate interference fringes using a laser. The ease with which interference fringes can be observed with a laser beam can sometimes cause problems in that stray reflections may give spurious interference fringes which can result in errors.
Normally, a single laser beam is used in interferometry, though interference has been observed using two independent lasers whose frequencies were sufficiently matched to satisfy the phase requirements. This has also been observed for widefield interference between two incoherent laser sources.
It is also possible to observe interference fringes using white light. A white light fringe pattern can be considered to be made up of a 'spectrum' of fringe patterns each of slightly different spacing. If all the fringe patterns are in phase in the centre, then the fringes will increase in size as the wavelength decreases and the summed intensity will show three to four fringes of varying colour. Young describes this very elegantly in his discussion of two slit interference. Since white light fringes are obtained only when the two waves have travelled equal distances from the light source, they can be very useful in interferometry, as they allow the zero path difference fringe to be identified.
To generate interference fringes, light from the source has to be divided into two waves which then have to be re-combined. Traditionally, interferometers have been classified as either amplitude-division or wavefront-division systems.
In an amplitude-division system, a beam splitter is used to divide the light into two beams travelling in different directions, which are then superimposed to produce the interference pattern. The Michelson interferometer and the Mach–Zehnder interferometer are examples of amplitude-division systems.
In wavefront-division systems, the wave is divided in space—examples are Young's double slit interferometer and Lloyd's mirror.
Interference can also be seen in everyday phenomena such as iridescence and structural coloration. For example, the colours seen in a soap bubble arise from interference of light reflecting off the front and back surfaces of the thin soap film. Depending on the thickness of the film, different colours interfere constructively and destructively.
Quantum interference – the observed wave-behavior of matter – resembles optical interference. Let Ψ ( x , t ) {\displaystyle \Psi (x,t)} be a wavefunction solution of the Schrödinger equation for a quantum mechanical object. Then the probability P ( x ) {\displaystyle P(x)} of observing the object at position x {\displaystyle x} is P ( x ) = | Ψ ( x , t ) | 2 = Ψ ∗ ( x , t ) Ψ ( x , t ) {\displaystyle P(x)=|\Psi (x,t)|^{2}=\Psi ^{*}(x,t)\Psi (x,t)} where * indicates complex conjugation. Quantum interference concerns the issue of this probability when the wavefunction is expressed as a sum or linear superposition of two terms Ψ ( x , t ) = Ψ A ( x , t ) + Ψ B ( x , t ) {\displaystyle \Psi (x,t)=\Psi _{A}(x,t)+\Psi _{B}(x,t)} :
Usually, Ψ A ( x , t ) {\displaystyle \Psi _{A}(x,t)} and Ψ B ( x , t ) {\displaystyle \Psi _{B}(x,t)} correspond to distinct situations A and B. When this is the case, the equation Ψ ( x , t ) = Ψ A ( x , t ) + Ψ B ( x , t ) {\displaystyle \Psi (x,t)=\Psi _{A}(x,t)+\Psi _{B}(x,t)} indicates that the object can be in situation A or situation B. The above equation can then be interpreted as: The probability of finding the object at x {\displaystyle x} is the probability of finding the object at x {\displaystyle x} when it is in situation A plus the probability of finding the object at x {\displaystyle x} when it is in situation B plus an extra term. This extra term, which is called the quantum interference term, is Ψ A ∗ ( x , t ) Ψ B ( x , t ) + Ψ A ( x , t ) Ψ B ∗ ( x , t ) {\displaystyle \Psi _{A}^{*}(x,t)\Psi _{B}(x,t)+\Psi _{A}(x,t)\Psi _{B}^{*}(x,t)} in the above equation. As in the classical wave case above, the quantum interference term can add (constructive interference) or subtract (destructive interference) from | Ψ A ( x , t ) | 2 + | Ψ B ( x , t ) | 2 {\displaystyle |\Psi _{A}(x,t)|^{2}+|\Psi _{B}(x,t)|^{2}} in the above equation depending on whether the quantum interference term is positive or negative. If this term is absent for all x {\displaystyle x} , then there is no quantum mechanical interference associated with situations A and B.
The best known example of quantum interference is the double-slit experiment. In this experiment, matter waves from electrons, atoms or molecules approach a barrier with two slits in it. One slit becomes Ψ A ( x , t ) {\displaystyle \Psi _{A}(x,t)} and the other becomes Ψ B ( x , t ) {\displaystyle \Psi _{B}(x,t)} . The interference pattern occurs on the far side, observed by detectors suitable to the particles originating the matter wave. The pattern matches the optical double slit pattern.
In acoustics, a beat is an interference pattern between two sounds of slightly different frequencies, perceived as a periodic variation in volume whose rate is the difference of the two frequencies.
With tuning instruments that can produce sustained tones, beats can be readily recognized. Tuning two tones to a unison will present a peculiar effect: when the two tones are close in pitch but not identical, the difference in frequency generates the beating. The volume varies like in a tremolo as the sounds alternately interfere constructively and destructively. As the two tones gradually approach unison, the beating slows down and may become so slow as to be imperceptible. As the two tones get further apart, their beat frequency starts to approach the range of human pitch perception, the beating starts to sound like a note, and a combination tone is produced. This combination tone can also be referred to as a missing fundamental, as the beat frequency of any two tones is equivalent to the frequency of their implied fundamental frequency.
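A minimal numerical sketch of beats, using the same sum-of-cosines identity as in the mechanisms section; the two frequencies chosen are illustrative assumptions.

```python
import math

f1, f2 = 440.0, 443.0        # Hz, two slightly mistuned tones (illustrative values)

beat_frequency = abs(f1 - f2)            # perceived loudness oscillation, here 3 Hz
carrier = (f1 + f2) / 2                  # the pitch actually heard, ~441.5 Hz

# The identity cos(2*pi*f1*t) + cos(2*pi*f2*t)
#   = 2*cos(2*pi*(f1-f2)/2*t) * cos(2*pi*(f1+f2)/2*t)
# shows the sum as a ~441.5 Hz tone whose amplitude envelope pulses at |f1 - f2|.
t = 0.123                                # any instant, in seconds
lhs = math.cos(2 * math.pi * f1 * t) + math.cos(2 * math.pi * f2 * t)
rhs = 2 * math.cos(math.pi * (f1 - f2) * t) * math.cos(math.pi * (f1 + f2) * t)

print(f"beat frequency: {beat_frequency} Hz, carrier: {carrier} Hz")
print(f"identity check at t = {t} s: {lhs:.6f} == {rhs:.6f}")
```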
Interferometry has played an important role in the advancement of physics, and also has a wide range of applications in physical and engineering measurement.
Thomas Young's double slit interferometer in 1803 demonstrated interference fringes when two small holes were illuminated by light from another small hole which was illuminated by sunlight. Young was able to estimate the wavelength of different colours in the spectrum from the spacing of the fringes. The experiment played a major role in the general acceptance of the wave theory of light. In quantum mechanics, this experiment is considered to demonstrate the inseparability of the wave and particle natures of light and other quantum particles (wave–particle duality). Richard Feynman was fond of saying that all of quantum mechanics can be gleaned from carefully thinking through the implications of this single experiment.
The results of the Michelson–Morley experiment are generally considered to be the first strong evidence against the theory of a luminiferous aether and in favor of special relativity.
Interferometry has been used in defining and calibrating length standards. When the metre was defined as the distance between two marks on a platinum-iridium bar, Michelson and Benoît used interferometry to measure the wavelength of the red cadmium line in the new standard, and also showed that it could be used as a length standard. Sixty years later, in 1960, the metre in the new SI system was defined to be equal to 1,650,763.73 wavelengths of the orange-red emission line in the electromagnetic spectrum of the krypton-86 atom in a vacuum. This definition was replaced in 1983 by defining the metre as the distance travelled by light in vacuum during a specific time interval. Interferometry is still fundamental in establishing the calibration chain in length measurement.
Interferometry is used in the calibration of slip gauges (called gauge blocks in the US) and in coordinate-measuring machines. It is also used in the testing of optical components.
In 1946, a technique called astronomical interferometry was developed. Astronomical radio interferometers usually consist either of arrays of parabolic dishes or two-dimensional arrays of omni-directional antennas. All of the telescopes in the array are widely separated and are usually connected together using coaxial cable, waveguide, optical fiber, or other type of transmission line. Interferometry increases the total signal collected, but its primary purpose is to vastly increase the resolution through a process called Aperture synthesis. This technique works by superposing (interfering) the signal waves from the different telescopes on the principle that waves that coincide with the same phase will add to each other while two waves that have opposite phases will cancel each other out. This creates a combined telescope that is equivalent in resolution (though not in sensitivity) to a single antenna whose diameter is equal to the spacing of the antennas farthest apart in the array.
An acoustic interferometer is an instrument for measuring the physical characteristics of sound waves in a gas or liquid, such as velocity, wavelength, absorption, or impedance. A vibrating crystal creates ultrasonic waves that are radiated into the medium. The waves strike a reflector placed parallel to the crystal and are reflected back to the source, where they are measured. | [
{
"paragraph_id": 0,
"text": "In physics, interference is a phenomenon in which two coherent waves are combined by adding their intensities or displacements with due consideration for their phase difference. The resultant wave may have greater intensity (constructive interference) or lower amplitude (destructive interference) if the two waves are in phase or out of phase, respectively. Interference effects can be observed with all types of waves, for example, light, radio, acoustic, surface water waves, gravity waves, or matter waves as well as in loudspeakers as electrical waves.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The word interference is derived from the Latin words inter which means \"between\" and fere which means \"hit or strike\", and was coined by Thomas Young in 1801.",
"title": "Etymology"
},
{
"paragraph_id": 2,
"text": "The principle of superposition of waves states that when two or more propagating waves of the same type are incident on the same point, the resultant amplitude at that point is equal to the vector sum of the amplitudes of the individual waves. If a crest of a wave meets a crest of another wave of the same frequency at the same point, then the amplitude is the sum of the individual amplitudes—this is constructive interference. If a crest of one wave meets a trough of another wave, then the amplitude is equal to the difference in the individual amplitudes—this is known as destructive interference. In ideal mediums (water, air are almost ideal) energy is always conserved, at points of destructive interference energy is stored in the elasticity of the medium. For example when we drop 2 pebbles in a pond we see a pattern but eventually waves continue and only when they reach the shore is the energy absorbed away from the medium.",
"title": "Mechanisms"
},
{
"paragraph_id": 3,
"text": "Constructive interference occurs when the phase difference between the waves is an even multiple of π (180°), whereas destructive interference occurs when the difference is an odd multiple of π. If the difference between the phases is intermediate between these two extremes, then the magnitude of the displacement of the summed waves lies between the minimum and maximum values.",
"title": "Mechanisms"
},
{
"paragraph_id": 4,
"text": "Consider, for example, what happens when two identical stones are dropped into a still pool of water at different locations. Each stone generates a circular wave propagating outwards from the point where the stone was dropped. When the two waves overlap, the net displacement at a particular point is the sum of the displacements of the individual waves. At some points, these will be in phase, and will produce a maximum displacement. In other places, the waves will be in anti-phase, and there will be no net displacement at these points. Thus, parts of the surface will be stationary—these are seen in the figure above and to the right as stationary blue-green lines radiating from the centre.",
"title": "Mechanisms"
},
{
"paragraph_id": 5,
"text": "Interference of light is a unique phenomenon in that we can never observe superposition of the EM field directly as we can for example in water. Superposition in the EM field is an assumed and necessary requirement, fundamentally 2 light beam pass through each other and continue on their respective paths. Light can be explained classically by the superposition of waves, however a deeper understanding of light interference requires knowledge of wave–particle duality of light which is due to quantum mechanics. Prime examples of light interference are the famous double-slit experiment, laser speckle, anti-reflective coatings and interferometers. Traditionally the classical wave model is taught as a basis for understanding optical interference, based on the Huygens–Fresnel principle however an explanation based on the Feynman path integral exists which takes into account quantum mechanical considerations.",
"title": "Mechanisms"
},
{
"paragraph_id": 6,
"text": "The above can be demonstrated in one dimension by deriving the formula for the sum of two waves. The equation for the amplitude of a sinusoidal wave traveling to the right along the x-axis is",
"title": "Mechanisms"
},
{
"paragraph_id": 7,
"text": "where A {\\displaystyle A} is the peak amplitude, k = 2 π / λ {\\displaystyle k=2\\pi /\\lambda } is the wavenumber and ω = 2 π f {\\displaystyle \\omega =2\\pi f} is the angular frequency of the wave. Suppose a second wave of the same frequency and amplitude but with a different phase is also traveling to the right",
"title": "Mechanisms"
},
{
"paragraph_id": 8,
"text": "where φ {\\displaystyle \\varphi } is the phase difference between the waves in radians. The two waves will superpose and add: the sum of the two waves is",
"title": "Mechanisms"
},
{
"paragraph_id": 9,
"text": "Using the trigonometric identity for the sum of two cosines: cos a + cos b = 2 cos ( a − b 2 ) cos ( a + b 2 ) , {\\textstyle \\cos a+\\cos b=2\\cos \\left({a-b \\over 2}\\right)\\cos \\left({a+b \\over 2}\\right),} this can be written",
"title": "Mechanisms"
},
{
"paragraph_id": 10,
"text": "This represents a wave at the original frequency, traveling to the right like its components, whose amplitude is proportional to the cosine of φ / 2 {\\displaystyle \\varphi /2} .",
"title": "Mechanisms"
},
{
"paragraph_id": 11,
"text": "A simple form of interference pattern is obtained if two plane waves of the same frequency intersect at an angle. Interference is essentially an energy redistribution process. The energy which is lost at the destructive interference is regained at the constructive interference. One wave is travelling horizontally, and the other is travelling downwards at an angle θ to the first wave. Assuming that the two waves are in phase at the point B, then the relative phase changes along the x-axis. The phase difference at the point A is given by",
"title": "Mechanisms"
},
{
"paragraph_id": 12,
"text": "It can be seen that the two waves are in phase when",
"title": "Mechanisms"
},
{
"paragraph_id": 13,
"text": "and are half a cycle out of phase when",
"title": "Mechanisms"
},
{
"paragraph_id": 14,
"text": "Constructive interference occurs when the waves are in phase, and destructive interference when they are half a cycle out of phase. Thus, an interference fringe pattern is produced, where the separation of the maxima is",
"title": "Mechanisms"
},
{
"paragraph_id": 15,
"text": "and df is known as the fringe spacing. The fringe spacing increases with increase in wavelength, and with decreasing angle θ.",
"title": "Mechanisms"
},
{
"paragraph_id": 16,
"text": "The fringes are observed wherever the two waves overlap and the fringe spacing is uniform throughout.",
"title": "Mechanisms"
},
{
"paragraph_id": 17,
"text": "A point source produces a spherical wave. If the light from two point sources overlaps, the interference pattern maps out the way in which the phase difference between the two waves varies in space. This depends on the wavelength and on the separation of the point sources. The figure to the right shows interference between two spherical waves. The wavelength increases from top to bottom, and the distance between the sources increases from left to right.",
"title": "Mechanisms"
},
{
"paragraph_id": 18,
"text": "When the plane of observation is far enough away, the fringe pattern will be a series of almost straight lines, since the waves will then be almost planar.",
"title": "Mechanisms"
},
{
"paragraph_id": 19,
"text": "Interference occurs when several waves are added together provided that the phase differences between them remain constant over the observation time.",
"title": "Mechanisms"
},
{
"paragraph_id": 20,
"text": "It is sometimes desirable for several waves of the same frequency and amplitude to sum to zero (that is, interfere destructively, cancel). This is the principle behind, for example, 3-phase power and the diffraction grating. In both of these cases, the result is achieved by uniform spacing of the phases.",
"title": "Mechanisms"
},
{
"paragraph_id": 21,
"text": "It is easy to see that a set of waves will cancel if they have the same amplitude and their phases are spaced equally in angle. Using phasors, each wave can be represented as A e i φ n {\\displaystyle Ae^{i\\varphi _{n}}} for N {\\displaystyle N} waves from n = 0 {\\displaystyle n=0} to n = N − 1 {\\displaystyle n=N-1} , where",
"title": "Mechanisms"
},
{
"paragraph_id": 22,
"text": "To show that",
"title": "Mechanisms"
},
{
"paragraph_id": 23,
"text": "one merely assumes the converse, then multiplies both sides by e i 2 π N . {\\displaystyle e^{i{\\frac {2\\pi }{N}}}.}",
"title": "Mechanisms"
},
{
"paragraph_id": 24,
"text": "The Fabry–Pérot interferometer uses interference between multiple reflections.",
"title": "Mechanisms"
},
{
"paragraph_id": 25,
"text": "A diffraction grating can be considered to be a multiple-beam interferometer; since the peaks which it produces are generated by interference between the light transmitted by each of the elements in the grating; see interference vs. diffraction for further discussion.",
"title": "Mechanisms"
},
{
"paragraph_id": 26,
"text": "Mechanical and gravity waves can be directly observed: they are real-valued wave functions; optical and matter waves cannot be directly observed: they are complex valued wave functions. Some of the differences between real valued and complex valued wave interference include:",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 27,
"text": "Because the frequency of light waves (~10 Hz) is too high for currently available detectors to detect the variation of the electric field of the light, it is possible to observe only the intensity of an optical interference pattern. The intensity of the light at a given point is proportional to the square of the average amplitude of the wave. This can be expressed mathematically as follows. The displacement of the two waves at a point r is:",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 28,
"text": "where A represents the magnitude of the displacement, φ represents the phase and ω represents the angular frequency.",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 29,
"text": "The displacement of the summed waves is",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 30,
"text": "The intensity of the light at r is given by",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 31,
"text": "This can be expressed in terms of the intensities of the individual waves as",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 32,
"text": "Thus, the interference pattern maps out the difference in phase between the two waves, with maxima occurring when the phase difference is a multiple of 2π. If the two beams are of equal intensity, the maxima are four times as bright as the individual beams, and the minima have zero intensity.",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 33,
"text": "Classically the two waves must have the same polarization to give rise to interference fringes since it is not possible for waves of different polarizations to cancel one another out or add together. Instead, when waves of different polarization are added together, they give rise to a wave of a different polarization state.",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 34,
"text": "Quantum mechanically the theories of Paul Dirac and Richard Feynman offer a more modern approach. Dirac showed that every quanta or photon of light acts on its own which he famously stated as \"every photon interferes with itself\". Richard Feynman showed that by evaluating a path integral where all possible paths are considered, that a number of higher probability paths will emerge. In thin films for example, film thickness which is not a multiple of light wavelength will not allow the quanta to traverse, only reflection is possible.",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 35,
"text": "The discussion above assumes that the waves which interfere with one another are monochromatic, i.e. have a single frequency—this requires that they are infinite in time. This is not, however, either practical or necessary. Two identical waves of finite duration whose frequency is fixed over that period will give rise to an interference pattern while they overlap. Two identical waves which consist of a narrow spectrum of frequency waves of finite duration (but shorter than their coherence time), will give a series of fringe patterns of slightly differing spacings, and provided the spread of spacings is significantly less than the average fringe spacing, a fringe pattern will again be observed during the time when the two waves overlap.",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 36,
"text": "Conventional light sources emit waves of differing frequencies and at different times from different points in the source. If the light is split into two waves and then re-combined, each individual light wave may generate an interference pattern with its other half, but the individual fringe patterns generated will have different phases and spacings, and normally no overall fringe pattern will be observable. However, single-element light sources, such as sodium- or mercury-vapor lamps have emission lines with quite narrow frequency spectra. When these are spatially and colour filtered, and then split into two waves, they can be superimposed to generate interference fringes. All interferometry prior to the invention of the laser was done using such sources and had a wide range of successful applications.",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 37,
"text": "A laser beam generally approximates much more closely to a monochromatic source, and thus it is much more straightforward to generate interference fringes using a laser. The ease with which interference fringes can be observed with a laser beam can sometimes cause problems in that stray reflections may give spurious interference fringes which can result in errors.",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 38,
"text": "Normally, a single laser beam is used in interferometry, though interference has been observed using two independent lasers whose frequencies were sufficiently matched to satisfy the phase requirements. This has also been observed for widefield interference between two incoherent laser sources.",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 39,
"text": "It is also possible to observe interference fringes using white light. A white light fringe pattern can be considered to be made up of a 'spectrum' of fringe patterns each of slightly different spacing. If all the fringe patterns are in phase in the centre, then the fringes will increase in size as the wavelength decreases and the summed intensity will show three to four fringes of varying colour. Young describes this very elegantly in his discussion of two slit interference. Since white light fringes are obtained only when the two waves have travelled equal distances from the light source, they can be very useful in interferometry, as they allow the zero path difference fringe to be identified.",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 40,
"text": "To generate interference fringes, light from the source has to be divided into two waves which then have to be re-combined. Traditionally, interferometers have been classified as either amplitude-division or wavefront-division systems.",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 41,
"text": "In an amplitude-division system, a beam splitter is used to divide the light into two beams travelling in different directions, which are then superimposed to produce the interference pattern. The Michelson interferometer and the Mach–Zehnder interferometer are examples of amplitude-division systems.",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 42,
"text": "In wavefront-division systems, the wave is divided in space—examples are Young's double slit interferometer and Lloyd's mirror.",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 43,
"text": "Interference can also be seen in everyday phenomena such as iridescence and structural coloration. For example, the colours seen in a soap bubble arise from interference of light reflecting off the front and back surfaces of the thin soap film. Depending on the thickness of the film, different colours interfere constructively and destructively.",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 44,
"text": "Quantum interference – the observed wave-behavior of matter – resembles optical interference. Let Ψ ( x , t ) {\\displaystyle \\Psi (x,t)} be a wavefunction solution of the Schrödinger equation for a quantum mechanical object. Then the probability P ( x ) {\\displaystyle P(x)} of observing the object at position x {\\displaystyle x} is P ( x ) = | Ψ ( x , t ) | 2 = Ψ ∗ ( x , t ) Ψ ( x , t ) {\\displaystyle P(x)=|\\Psi (x,t)|^{2}=\\Psi ^{*}(x,t)\\Psi (x,t)} where * indicates complex conjugation. Quantum interference concerns the issue of this probability when the wavefunction is expressed as a sum or linear superposition of two terms Ψ ( x , t ) = Ψ A ( x , t ) + Ψ B ( x , t ) {\\displaystyle \\Psi (x,t)=\\Psi _{A}(x,t)+\\Psi _{B}(x,t)} :",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 45,
"text": "Usually, Ψ A ( x , t ) {\\displaystyle \\Psi _{A}(x,t)} and Ψ B ( x , t ) {\\displaystyle \\Psi _{B}(x,t)} correspond to distinct situations A and B. When this is the case, the equation Ψ ( x , t ) = Ψ A ( x , t ) + Ψ B ( x , t ) {\\displaystyle \\Psi (x,t)=\\Psi _{A}(x,t)+\\Psi _{B}(x,t)} indicates that the object can be in situation A or situation B. The above equation can then be interpreted as: The probability of finding the object at x {\\displaystyle x} is the probability of finding the object at x {\\displaystyle x} when it is in situation A plus the probability of finding the object at x {\\displaystyle x} when it is in situation B plus an extra term. This extra term, which is called the quantum interference term, is Ψ A ∗ ( x , t ) Ψ B ( x , t ) + Ψ A ( x , t ) Ψ B ∗ ( x , t ) {\\displaystyle \\Psi _{A}^{*}(x,t)\\Psi _{B}(x,t)+\\Psi _{A}(x,t)\\Psi _{B}^{*}(x,t)} in the above equation. As in the classical wave case above, the quantum interference term can add (constructive interference) or subtract (destructive interference) from | Ψ A ( x , t ) | 2 + | Ψ B ( x , t ) | 2 {\\displaystyle |\\Psi _{A}(x,t)|^{2}+|\\Psi _{B}(x,t)|^{2}} in the above equation depending on whether the quantum interference term is positive or negative. If this term is absent for all x {\\displaystyle x} , then there is no quantum mechanical interference associated with situations A and B.",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 46,
"text": "The best known example of quantum interference is the double-slit experiment. In this experiment, matter waves from electrons, atoms or molecules approach a barrier with two slits in it. One slit becomes Ψ A ( x , t ) {\\displaystyle \\Psi _{A}(x,t)} and the other becomes Ψ B ( x , t ) {\\displaystyle \\Psi _{B}(x,t)} . The interference pattern occurs on the far side, observed by detectors suitable to the particles originating the matter wave. The pattern matches the optical double slit pattern.",
"title": "Complex valued wave functions"
},
{
"paragraph_id": 47,
"text": "In acoustics, a beat is an interference pattern between two sounds of slightly different frequencies, perceived as a periodic variation in volume whose rate is the difference of the two frequencies.",
"title": "Applications"
},
{
"paragraph_id": 48,
"text": "With tuning instruments that can produce sustained tones, beats can be readily recognized. Tuning two tones to a unison will present a peculiar effect: when the two tones are close in pitch but not identical, the difference in frequency generates the beating. The volume varies like in a tremolo as the sounds alternately interfere constructively and destructively. As the two tones gradually approach unison, the beating slows down and may become so slow as to be imperceptible. As the two tones get further apart, their beat frequency starts to approach the range of human pitch perception, the beating starts to sound like a note, and a combination tone is produced. This combination tone can also be referred to as a missing fundamental, as the beat frequency of any two tones is equivalent to the frequency of their implied fundamental frequency.",
"title": "Applications"
},
{
"paragraph_id": 49,
"text": "Interferometry has played an important role in the advancement of physics, and also has a wide range of applications in physical and engineering measurement.",
"title": "Applications"
},
{
"paragraph_id": 50,
"text": "Thomas Young's double slit interferometer in 1803 demonstrated interference fringes when two small holes were illuminated by light from another small hole which was illuminated by sunlight. Young was able to estimate the wavelength of different colours in the spectrum from the spacing of the fringes. The experiment played a major role in the general acceptance of the wave theory of light. In quantum mechanics, this experiment is considered to demonstrate the inseparability of the wave and particle natures of light and other quantum particles (wave–particle duality). Richard Feynman was fond of saying that all of quantum mechanics can be gleaned from carefully thinking through the implications of this single experiment.",
"title": "Applications"
},
{
"paragraph_id": 51,
"text": "The results of the Michelson–Morley experiment are generally considered to be the first strong evidence against the theory of a luminiferous aether and in favor of special relativity.",
"title": "Applications"
},
{
"paragraph_id": 52,
"text": "Interferometry has been used in defining and calibrating length standards. When the metre was defined as the distance between two marks on a platinum-iridium bar, Michelson and Benoît used interferometry to measure the wavelength of the red cadmium line in the new standard, and also showed that it could be used as a length standard. Sixty years later, in 1960, the metre in the new SI system was defined to be equal to 1,650,763.73 wavelengths of the orange-red emission line in the electromagnetic spectrum of the krypton-86 atom in a vacuum. This definition was replaced in 1983 by defining the metre as the distance travelled by light in vacuum during a specific time interval. Interferometry is still fundamental in establishing the calibration chain in length measurement.",
"title": "Applications"
},
{
"paragraph_id": 53,
"text": "Interferometry is used in the calibration of slip gauges (called gauge blocks in the US) and in coordinate-measuring machines. It is also used in the testing of optical components.",
"title": "Applications"
},
{
"paragraph_id": 54,
"text": "In 1946, a technique called astronomical interferometry was developed. Astronomical radio interferometers usually consist either of arrays of parabolic dishes or two-dimensional arrays of omni-directional antennas. All of the telescopes in the array are widely separated and are usually connected together using coaxial cable, waveguide, optical fiber, or other type of transmission line. Interferometry increases the total signal collected, but its primary purpose is to vastly increase the resolution through a process called Aperture synthesis. This technique works by superposing (interfering) the signal waves from the different telescopes on the principle that waves that coincide with the same phase will add to each other while two waves that have opposite phases will cancel each other out. This creates a combined telescope that is equivalent in resolution (though not in sensitivity) to a single antenna whose diameter is equal to the spacing of the antennas farthest apart in the array.",
"title": "Applications"
},
{
"paragraph_id": 55,
"text": "An acoustic interferometer is an instrument for measuring the physical characteristics of sound waves in a gas or liquid, such velocity, wavelength, absorption, or impedance. A vibrating crystal creates ultrasonic waves that are radiated into the medium. The waves strike a reflector placed parallel to the crystal, reflected back to the source and measured.",
"title": "Applications"
},
{
"paragraph_id": 56,
"text": "",
"title": "Applications"
}
]
| In physics, interference is a phenomenon in which two coherent waves are combined by adding their intensities or displacements with due consideration for their phase difference. The resultant wave may have greater intensity or lower amplitude if the two waves are in phase or out of phase, respectively.
Interference effects can be observed with all types of waves, for example, light, radio, acoustic, surface water waves, gravity waves, or matter waves as well as in loudspeakers as electrical waves. | 2001-10-08T15:28:47Z | 2023-12-28T18:59:25Z | [
"Template:Short description",
"Template:Pi",
"Template:Reflist",
"Template:Commons",
"Template:For",
"Template:Math",
"Template:Cite book",
"Template:Cols",
"Template:Wiktionary",
"Template:Quantum mechanics topics",
"Template:Cite journal",
"Template:Redirect",
"Template:Technical",
"Template:See also",
"Template:Quantum mechanics",
"Template:Main",
"Template:Colend"
]
| https://en.wikipedia.org/wiki/Wave_interference |
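For reference, the two-wave superposition that the "Mechanisms" paragraphs above describe in words can be written out explicitly. The following is a sketch using the symbols defined in the text (peak amplitude A, wavenumber k = 2π/λ, angular frequency ω = 2πf, phase difference φ, crossing angle θ, intensities I_1 and I_2); the labels W_1 and W_2 are introduced here for convenience, and since the displayed equations of the source article are not reproduced above, these should be read as the standard textbook forms rather than verbatim quotations:

W_1(x,t) = A\cos(kx - \omega t), \qquad W_2(x,t) = A\cos(kx - \omega t + \varphi)

W_1 + W_2 = 2A\cos\left(\frac{\varphi}{2}\right)\cos\left(kx - \omega t + \frac{\varphi}{2}\right)

\Delta\varphi(x) = \frac{2\pi x \sin\theta}{\lambda} \quad\Rightarrow\quad d_f = \frac{\lambda}{\sin\theta} \qquad \text{(fringe spacing for two plane waves crossing at angle } \theta\text{)}

I(\mathbf{r}) = I_1 + I_2 + 2\sqrt{I_1 I_2}\,\cos\varphi \qquad \text{(for } I_1 = I_2\text{: maxima } 4I_1\text{, minima } 0\text{)}

These forms are consistent with the prose above: the summed amplitude is proportional to cos(φ/2), the fringe spacing grows with wavelength and with decreasing θ, and two equal-intensity beams give maxima four times as bright as either beam alone with minima of zero intensity.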
15,114 | Indictable offence | In many common law jurisdictions (e.g. England and Wales, Ireland, Canada, Hong Kong, India, Australia, New Zealand, Malaysia, Singapore), an indictable offence is an offence which can only be tried on an indictment after a preliminary hearing to determine whether there is a prima facie case to answer or by a grand jury (in contrast to a summary offence). A similar concept in the United States is known as a felony, which for federal crimes, also requires an indictment. In Scotland, which is a hybrid common law jurisdiction, the procurator fiscal will commence solemn proceedings for serious crimes to be prosecuted on indictment before a jury.
In Australia, an indictable offence is more serious than a summary offence, and one where the defendant has the right to trial by jury. They include crimes such as murder, rape, and threatening or endangering life. The system is underpinned by various state and territory acts and the Commonwealth Crimes Act 1914.
In South Australia, New South Wales, and Queensland, indictable offences are further split into two categories: major indictable offences (including murder, rape, and threatening or endangering life) are heard in the state's Supreme Court, while minor indictable offences are heard in the District Court. In South Australia, minor indictable offences are generally heard in magistrates courts, although the defendant may elect to be heard in the District Court.
In Canada, an indictable offence is a crime that is more serious than a summary offence. Examples of indictable offences include theft over $5,000, breaking and entering, aggravated sexual assault, and murder. Maximum penalties for indictable offences are different depending on the crime and can include life in prison. There are minimum penalties for some indictable offences.
In relation to England and Wales, the expression indictable offence means an offence which, if committed by an adult, is triable on indictment, whether it is exclusively so triable or triable either way; and the term indictable, in its application to offences, is to be construed accordingly. In this definition, references to the way or ways in which an offence is triable are to be construed without regard to the effect, if any, of section 22 of the Magistrates' Courts Act 1980 on the mode of trial in a particular case.
An either-way offence allows the defendant to elect between trial by jury on indictment in the Crown Court and summary trial in a magistrates' court. However, the election may be overruled by the magistrates' court if the facts suggest that the sentencing powers of a magistrates' court would be inadequate to reflect the seriousness of the offence.
In relation to some indictable offences, for example criminal damage, only summary trial is available unless the damage caused exceeds £5,000.
A youth court has jurisdiction to try all indictable offences with the exception of homicide and certain firearms offences, and will normally do so provided that the available sentencing power of two years' detention is adequate to punish the offender if found guilty.
See section 64 of the Criminal Law Act 1977.
Grand juries were abolished in 1933.
Some offences such as murder and rape are considered so serious that they can only be tried on indictment at the Crown Court where the widest range of sentencing powers is available to the judge.
The expression indictable-only offence was defined by section 51 of the Crime and Disorder Act 1998, as originally enacted, as an offence triable only on indictment. Sections 51 and 52 of, and Schedule 3 to, that Act abolished committal proceedings for such offences and made other provisions in relation to them.
When the accused is charged with an indictable-only offence, he or she will be tried in the Crown Court. The rules are different in England and Wales in respect of those under 18 years of age.
See also section 14(a) of the Criminal Law Act 1977.
Similarly in New Zealand, a rape or murder charge will be tried at the High Court, while less serious offences such as theft will be tried at the District Court. However, the District Court can hold both jury and summary trials.
In the United States, federal felonies always require an indictment from a grand jury before proceeding to trial. In contrast, while misdemeanours may proceed to trial on indictment, this is not required, as they may also proceed on information or complaint. Different states have different policies; since the requirement of an indictment by grand jury is not incorporated against the states, in many states, an indictment is not required for a felony case to proceed. | [
{
"paragraph_id": 0,
"text": "In many common law jurisdictions (e.g. England and Wales, Ireland, Canada, Hong Kong, India, Australia, New Zealand, Malaysia, Singapore), an indictable offence is an offence which can only be tried on an indictment after a preliminary hearing to determine whether there is a prima facie case to answer or by a grand jury (in contrast to a summary offence). A similar concept in the United States is known as a felony, which for federal crimes, also requires an indictment. In Scotland, which is a hybrid common law jurisdiction, the procurator fiscal will commence solemn proceedings for serious crimes to be prosecuted on indictment before a jury.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In Australia, an indictable offence is more serious than a summary offence, and one where the defendant has the right to trial by jury. They include crimes such as murder, rape, and threatening or endangering life. The system is underpinned by various state and territory acts and the Commonwealth Crimes Act 1914.",
"title": "Australia"
},
{
"paragraph_id": 2,
"text": "In South Australia, New South Wales, and Queensland, indictable offences are further split into two categories: major indictable offences (including murder, rape, and threatening or endangering life) are heard in the state's Supreme Court, while minor indictable offences are heard in the District Court. In South Australia, minor indictable offences are generally heard in magistrates courts, although the defendant may elect to be heard in the District Court.",
"title": "Australia"
},
{
"paragraph_id": 3,
"text": "In Canada, an indictable offence is a crime that is more serious than a summary offence. Examples of indictable offences include theft over $5,000, breaking and entering, aggravated sexual assault, and murder. Maximum penalties for indictable offences are different depending on the crime and can include life in prison. There are minimum penalties for some indictable offences.",
"title": "Canada"
},
{
"paragraph_id": 4,
"text": "In relation to England and Wales, the expression indictable offence means an offence which, if committed by an adult, is triable on indictment, whether it is exclusively so triable or triable either way; and the term indictable, in its application to offences, is to be construed accordingly. In this definition, references to the way or ways in which an offence is triable are to be construed without regard to the effect, if any, of section 22 of the Magistrates' Courts Act 1980 on the mode of trial in a particular case.",
"title": "England and Wales"
},
{
"paragraph_id": 5,
"text": "An either-way offence allows the defendant to elect between trial by jury on indictment in the Crown Court and summary trial in a magistrates' court. However, the election may be overruled by the magistrates' court if the facts suggest that the sentencing powers of a magistrates' court would be inadequate to reflect the seriousness of the offence.",
"title": "England and Wales"
},
{
"paragraph_id": 6,
"text": "In relation to some indictable offences, for example criminal damage, only summary trial is available unless the damage caused exceeds £5,000.",
"title": "England and Wales"
},
{
"paragraph_id": 7,
"text": "A youth court has jurisdiction to try all indictable offences with the exception of homicide and certain firearms offences, and will normally do so provided that the available sentencing power of two years' detention is adequate to punish the offender if found guilty.",
"title": "England and Wales"
},
{
"paragraph_id": 8,
"text": "See section 64 of the Criminal Law Act 1977.",
"title": "England and Wales"
},
{
"paragraph_id": 9,
"text": "Grand juries were abolished in 1933.",
"title": "England and Wales"
},
{
"paragraph_id": 10,
"text": "Some offences such as murder and rape are considered so serious that they can only be tried on indictment at the Crown Court where the widest range of sentencing powers is available to the judge.",
"title": "England and Wales"
},
{
"paragraph_id": 11,
"text": "The expression indictable-only offence was defined by section 51 of the Crime and Disorder Act 1998, as originally enacted, as an offence triable only on indictment. Sections 51 and 52 of, and Schedule 3 to, that Act abolished committal proceedings for such offences and made other provisions in relation to them.",
"title": "England and Wales"
},
{
"paragraph_id": 12,
"text": "When the accused is charged with an indictable-only offence, he or she will be tried in the Crown Court. The rules are different in England and Wales in respect of those under 18 years of age.",
"title": "England and Wales"
},
{
"paragraph_id": 13,
"text": "See also section 14(a) of the Criminal Law Act 1977.",
"title": "England and Wales"
},
{
"paragraph_id": 14,
"text": "Similarly in New Zealand, a rape or murder charge will be tried at the High Court, while less serious offences such as theft will be tried at the District Court. However, the District Court can hold both jury and summary trials.",
"title": "New Zealand"
},
{
"paragraph_id": 15,
"text": "In the United States, federal felonies always require an indictment from a grand jury before proceeding to trial. In contrast, while misdemeanours may proceed to trial on indictment; this is not required, as they may also proceed on information or complaint. Different states have different policies; since the requirement of an indictment by grand jury is not incorporated against the states, in many states, an indictment is not required for a felony case to proceed.",
"title": "United States"
}
]
| In many common law jurisdictions, an indictable offence is an offence which can only be tried on an indictment after a preliminary hearing to determine whether there is a prima facie case to answer or by a grand jury. A similar concept in the United States is known as a felony, which for federal crimes, also requires an indictment. In Scotland, which is a hybrid common law jurisdiction, the procurator fiscal will commence solemn proceedings for serious crimes to be prosecuted on indictment before a jury. | 2002-02-25T15:51:15Z | 2023-10-11T09:04:33Z | [
"Template:Reflist",
"Template:Cite web",
"Template:English criminal law navbox",
"Template:Types of crime",
"Template:Short description",
"Template:Refimprove",
"Template:Criminal law",
"Template:Multiple image"
]
| https://en.wikipedia.org/wiki/Indictable_offence |
15,116 | Inter Milan | Football Club Internazionale Milano, commonly referred to as Internazionale (pronounced [ˌinternattsjoˈnaːle]) or simply Inter, and colloquially known as Inter Milan in English-speaking countries, is an Italian professional football club based in Milan, Lombardy. Inter is the only Italian side to have always competed in the top flight of Italian football since its debut in 1909.
Founded in 1908 following a schism within the Milan Cricket and Football Club (now AC Milan), Inter won its first championship in 1910. Since its formation, the club has won 35 domestic trophies, including 19 league titles, 9 Coppa Italia, and 7 Supercoppa Italiana. From 2006 to 2010, the club won five successive league titles, equalling the all-time record at that time. They have won the European Cup/Champions League three times: two back-to-back in 1964 and 1965, and then another in 2010. Their latest win completed an unprecedented Italian seasonal treble, with Inter winning the Coppa Italia and the Scudetto the same year. The club has also won three UEFA Cups, two Intercontinental Cups and one FIFA Club World Cup.
Inter's home games are played at the San Siro stadium, which they share with city rivals AC Milan. The stadium is the largest in Italian football with a capacity of 75,817. They have long-standing rivalries with Milan, with whom they contest the Derby della Madonnina, and Juventus, with whom they contest the Derby d'Italia; their rivalry with the former is one of the most followed derbies in football. As of 2019, Inter has the highest home game attendance in Italy and the sixth highest attendance in Europe. Since 2016, the club has been majority-owned by Chinese holding company Suning Holdings Group. Inter is one of the most valuable clubs in Italian and world football.
The club was founded on 9 March 1908 as Football Club Internazionale, following the schism with the Milan Cricket and Football Club (now AC Milan). The name of the club derives from the wish of its founding members to accept foreign players without limits as well as Italians.
The club won its first championship in 1910 and its second in 1920. The captain and coach of the first championship winning team was Virgilio Fossati, who was later killed in battle while serving in the Italian army during World War I. In 1922, Inter was at risk of relegation to the second division, but they remained in the top league after winning two play-offs.
Six years later, during the Fascist era, the club was forced to merge with the Unione Sportiva Milanese and was renamed Società Sportiva Ambrosiana. During the 1928–29 season, the team wore white jerseys with a red cross emblazoned on them; the jersey's design was inspired by the flag and coat of arms of the city of Milan. In 1929, the new club chairman Oreste Simonotti changed the club's name to Associazione Sportiva Ambrosiana and restored the previous black-and-blue jerseys; however, supporters continued to call the team Inter, and in 1931 new chairman Pozzani caved in to shareholder pressure and changed the name to Associazione Sportiva Ambrosiana-Inter.
Their first Coppa Italia (Italian Cup) was won in 1938–39, led by the iconic Giuseppe Meazza, after whom the San Siro stadium is officially named. A fifth championship followed in 1940, despite Meazza incurring an injury. After the end of World War II the club regained its original name, winning its sixth championship in 1953 and its seventh in 1954.
In 1960, manager Helenio Herrera joined Inter from Barcelona, bringing with him his midfield general Luis Suárez, who won the European Footballer of the Year in the same year for his role in Barcelona's La Liga/Fairs Cup double. He would transform Inter into one of the greatest teams in Europe. He modified a 5–3–2 tactic known as the "Verrou" ("door bolt") which created greater flexibility for counterattacks. The catenaccio system was invented by an Austrian coach, Karl Rappan. Rappan's original system was implemented with four fixed defenders, playing a strict man-to-man marking system, plus a playmaker in the middle of the field who plays the ball together with two midfield wings. Herrera would modify it by adding a fifth defender, the sweeper or libero behind the two centre backs. The sweeper or libero who acted as the free man would deal with any attackers who went through the two centre backs. Inter finished third in the Serie A in his first season, second the next year and first in his third season. Then followed a back-to-back European Cup victory in 1964 and 1965, earning him the title "il Mago" ("the Wizard"). The core of Herrera's team were the attacking fullbacks Tarcisio Burgnich and Giacinto Facchetti, Armando Picchi the sweeper, Suárez the playmaker, Jair the winger, Mario Corso the left midfielder, and Sandro Mazzola, who played on the inside-right.
In 1964, Inter reached the European Cup Final by beating Borussia Dortmund in the semi-final and Partizan in the quarter-final. In the final, they met Real Madrid, a team that had reached seven out of the nine finals to date. Mazzola scored two goals in a 3–1 victory, and then the team won the Intercontinental Cup against Independiente.
A year later, Inter repeated the feat by beating two-time winner Benfica in the final, held at home, with a goal from Jair, after overturning a 3–1 first-leg defeat to Liverpool in the semi-finals with a 3–0 win; they then beat Independiente again in the Intercontinental Cup, becoming the first European team to win that competition twice in a row. In 1965 Inter also came close to becoming the first team in European football history to win a treble: they won the Serie A title but lost the 1965 Coppa Italia final, played on 29 August 1965.
Inter again reached the semi-finals in 1966, but this time lost to Real Madrid, who went on to win the tournament.
In 1967, after eliminating Real Madrid in the quarter-finals, Inter, missing the injured Suárez, lost the European Cup final in Lisbon 2–1 to Celtic. During that year the club changed its name to Football Club Internazionale Milano.
Following the golden era of the 1960s, Inter managed to win their eleventh league title in 1971 and their twelfth in 1980. Inter were defeated for the second time in five years in the final of the European Cup, going down 0–2 to Johan Cruyff's Ajax in 1972. During the 1970s and the 1980s, Inter also added two to its Coppa Italia tally, in 1977–78 and 1981–82.
German internationals Hansi Müller (at Inter from 1982 to 1984, after playing for VfB Stuttgart from 1975 to 1982) and Karl-Heinz Rummenigge (at Inter from 1984 to 1987, following a decade at Bayern Munich) both played for the club during this period. Led by the German duo of Andreas Brehme and Lothar Matthäus, and Argentine Ramón Díaz, Inter captured the 1989 Serie A championship. Inter were unable to defend their title despite adding fellow German Jürgen Klinsmann to the squad and winning their first Supercoppa Italiana at the start of the season.
The 1990s was a period of disappointment. While their great rivals Milan and Juventus were achieving success both domestically and in Europe, Inter were left behind, with repeated mediocre results in the domestic league standings, their worst coming in 1993–94 when they finished just one point out of the relegation zone. Nevertheless, they achieved some European success with three UEFA Cup victories in 1991, 1994 and 1998.
With Massimo Moratti's takeover from Ernesto Pellegrini in 1995, Inter twice broke the world record transfer fee in this period (£19.5 million for Ronaldo from Barcelona in 1997 and £31 million for Christian Vieri from Lazio two years later). However, the 1990s remained the only decade in Inter's history, alongside the 1940s, in which they did not win a single Serie A championship. For Inter fans, it was difficult to find who in particular was to blame for the troubled times and this led to some icy relations between them and the chairman, the managers and even some individual players.
Moratti later became a target of the fans, especially when he sacked the much-loved coach Luigi Simoni after only a few games into the 1998–99 season, having just received the Italian manager of the year award for 1998 the day before being dismissed. That season, Inter failed to qualify for any European competition for the first time in almost ten years, finishing in eighth place.
The following season, Moratti appointed former Juventus manager Marcello Lippi, and signed players such as Angelo Peruzzi and Laurent Blanc together with other former Juventus players Vieri and Vladimir Jugović. The team came close to their first domestic success since 1989 when they reached the Coppa Italia final only to be defeated by Lazio.
Inter's misfortunes continued the following season, losing the 2000 Supercoppa Italiana match against Lazio 4–3 after initially taking the lead through new signing Robbie Keane. They were also eliminated in the preliminary round of the Champions League by Swedish club Helsingborgs IF, with Álvaro Recoba missing a crucial late penalty. Lippi was sacked after only a single game of the new season following Inter's first ever Serie A defeat to Reggina. Marco Tardelli, chosen to replace Lippi, failed to improve results, and is remembered by Inter fans as the manager that lost 6–0 in the city derby against Milan. Other members of the Inter "family" during this period that suffered were the likes of Vieri and Fabio Cannavaro, both of whom had their restaurants in Milan vandalised after defeats to the Rossoneri.
In 2002, not only did Inter manage to make it to the UEFA Cup semi-finals, but were also only 45 minutes away from capturing the Scudetto when they needed to maintain their one-goal advantage away to Lazio. Inter were 2–1 up after only 24 minutes. Lazio equalised during first half injury time and then scored two more goals in the second half to clinch victory that eventually saw Juventus win the championship. The next season, Inter finished as league runners-up and also managed to make it to the 2002–03 Champions League semi-finals against Milan, losing on the away goals rule.
On 8 July 2004, Inter appointed former Lazio coach Roberto Mancini as its new head coach. In his first season, the team collected 72 points from 18 wins, 18 draws and only two losses, as well as winning the Coppa Italia and later the Supercoppa Italiana. On 11 May 2006, Inter retained their Coppa Italia title once again after defeating Roma with a 4–1 aggregate victory (a 1–1 scoreline in Rome and a 3–1 win at the San Siro).
Inter were awarded the 2005–06 Serie A championship retrospectively after title-winning Juventus was relegated and points were stripped from Milan due to the Calciopoli scandal. During the following season, Inter went on a record-breaking run of 17 consecutive victories in Serie A, starting on 25 September 2006 with a 4–1 home victory over Livorno, and ending on 28 February 2007, after a 1–1 draw at home to Udinese. On 22 April 2007, Inter won their second consecutive Scudetto—and first on the field since 1989—when they defeated Siena 2–1 at Stadio Artemio Franchi. Italian World Cup-winning defender Marco Materazzi scored both goals.
Inter started the 2007–08 season with the goal of winning both Serie A and Champions League. The team started well in the league, topping the table from the first round of matches, and also managed to qualify for the Champions League knockout stage. However, a late collapse, leading to a 2–0 defeat with ten men away to Liverpool on 19 February in the Champions League, threw into question manager Roberto Mancini's future at Inter while domestic form took a sharp turn of fortune with the team failing to win in the three following Serie A games. After being eliminated by Liverpool in the Champions League, Mancini announced his intention to leave his job immediately only to change his mind the following day. On the final day of the 2007–08 Serie A season, Inter played Parma away, and two goals from Zlatan Ibrahimović sealed their third consecutive championship. Mancini, however, was sacked soon after due to his previous announcement to leave the club.
On 2 June 2008, Inter appointed former Porto and Chelsea boss José Mourinho as new head coach. In his first season, the Nerazzurri won the Supercoppa Italiana and a fourth consecutive title, though they fell in the first knockout round of the Champions League for a third straight year, losing to eventual finalist Manchester United. In winning the league title, Inter became the first club in 60 years to take the championship four times in a row, joining Torino and Juventus as the only clubs to accomplish this feat and becoming the first club based outside Turin to do so.
Inter won the 2009–10 Champions League, defeating reigning champions Barcelona in the semi-final before beating Bayern Munich 2–0 in the final with two goals from Diego Milito. Inter also won the 2009–10 Serie A title by two points over Roma, and the 2010 Coppa Italia by defeating the same side 1–0 in the final. This made Inter the first Italian team to win the treble. At the end of the season, Mourinho left the club to manage Real Madrid; he was replaced by Rafael Benítez.
On 21 August 2010, Inter defeated Roma 3–1 and won the 2010 Supercoppa Italiana, their fourth trophy of the year. In December 2010, they claimed the FIFA Club World Cup for the first time after a 3–0 win against TP Mazembe in the final. However, after this win, on 23 December 2010, due to their declining performance in Serie A, the team fired Benítez. He was replaced by Leonardo the following day.
Leonardo started with 30 points from 12 games, with an average of 2.5 points per game, better than his predecessors Benítez and Mourinho. On 6 March 2011, Leonardo set a new Italian Serie A record by collecting 33 points in 13 games; the previous record was 32 points in 13 games, set by Fabio Capello in the 2004–05 season. Leonardo led the club to the quarter-finals of the Champions League, where they lost to Schalke 04, and to the Coppa Italia title. At the end of the season, however, he resigned and was succeeded by Gian Piero Gasperini, Claudio Ranieri and Andrea Stramaccioni, all hired during the following season.
On 1 August 2012, the club announced that Moratti was to sell a minority interest of the club to a Chinese consortium led by Kenneth Huang. On the same day, Inter announced an agreement was formed with China Railway Construction Corporation Limited for a new stadium project, however, the deal with the Chinese eventually collapsed. The 2012–13 season was the worst in recent club history with Inter finishing ninth in Serie A and failing to qualify for any European competitions. Walter Mazzarri was appointed to replace Stramaccioni as the manager for 2013–14 season on 24 May 2013, having ended his tenure at Napoli. He guided the club to fifth in Serie A and to 2014–15 UEFA Europa League qualification.
On 15 October 2013, an Indonesian consortium (International Sports Capital HK Ltd.) led by Erick Thohir, Handy Soetedjo and Rosan Roeslani, signed an agreement to acquire 70% of Inter shares from Internazionale Holding S.r.l. Immediately after the deal, Moratti's Internazionale Holding S.r.l. still retained 29.5% of the shares of FC Internazionale Milano S.p.A. After the deal, the shares of Inter was owned by a chain of holding companies, namely International Sports Capital S.p.A. of Italy (for 70% stake), International Sports Capital HK Limited and Asian Sports Ventures HK Limited of Hong Kong. Asian Sports Ventures HK Limited, itself another intermediate holding company, was owned by Nusantara Sports Ventures HK Limited (60% stake, a company owned by Thohir), Alke Sports Investment HK Limited (20% stake) and Aksis Sports Capital HK Limited (20% stake).
Thohir, who also co-owned Major League Soccer (MLS) club D.C. United and Indonesia Super League (ISL) club Persib Bandung, announced on 2 December 2013 that Inter and D.C. United had formed a strategic partnership. During the Thohir era the club began to modify its financial structure from one reliant on continual owner investment to a more self-sustaining business model, although the club still breached UEFA Financial Fair Play Regulations in 2015. The club was fined and received a squad reduction in UEFA competitions, with additional penalties suspended during the probation period. During this time, Roberto Mancini returned as the club's manager on 14 November 2014, with Inter finishing eighth. Inter finished the 2015–16 season in fourth place, failing to return to the Champions League.
On 6 June 2016, Suning Holdings Group (via a Luxembourg-based subsidiary Great Horizon S.á r.l.) a company owned by Zhang Jindong, co-founder and chairman of Suning Commerce Group, acquired a majority stake of Inter from Thohir's consortium International Sports Capital S.p.A. and from Moratti family's remaining shares in Internazionale Holding S.r.l. According to various filings, the total investment from Suning was €270 million. The deal was approved by an extraordinary general meeting on 28 June 2016, from which Suning Holdings Group had acquired a 68.55% stake in the club.
The first season under new ownership, however, started with poor performances in pre-season friendlies. On 8 August 2016, Inter parted company with head coach Roberto Mancini by mutual consent over disagreements regarding the club's direction. He was replaced by Frank de Boer, who was sacked on 1 November 2016 after leading Inter to a 4W–2D–5L record in 11 Serie A games as head coach. His successor, Stefano Pioli, did not save the team from recording the worst group-stage result in UEFA competitions in the club's history. Despite an eight-game winning streak, he and the club parted ways before the season's end, when it became clear they would finish outside the league's top three for the sixth consecutive season. On 9 June 2017, former Roma coach Luciano Spalletti was appointed as Inter manager, signing a two-year contract, and eleven months later Inter clinched a UEFA Champions League group stage spot, after six years without Champions League participation, thanks to a 3–2 victory against Lazio in the final game of the 2017–18 Serie A. Due to this success, in August the club extended Spalletti's contract to 2021.
On 26 October 2018, Steven Zhang was appointed as new president of the club. On 25 January 2019, the club officially announced that LionRock Capital from Hong Kong reached an agreement with International Sports Capital HK Limited, in order to acquire its 31.05% shares in Inter and to become the club's new minority shareholder. After the 2018–19 Serie A season, despite Inter finishing fourth, Spalletti was sacked. In May 2021, American investment firm Oaktree Capital loaned Inter $336 million to cover losses incurred during the COVID-19 pandemic.
On 31 May 2019, Inter appointed former Juventus and Italy manager Antonio Conte as their new coach, signing a three-year deal. In September 2019, Steven Zhang was elected to the board of the European Club Association. In the 2019–20 Serie A, Inter Milan finished as runners-up after winning 2–0 against Atalanta on the last matchday. They also reached the 2020 UEFA Europa League Final, ultimately losing 3–2 to Sevilla. Following Atalanta's draw against Sassuolo on 2 May 2021, Internazionale were confirmed as champions for the first time in eleven years, ending Juventus' run of nine consecutive titles. However, despite securing Serie A glory, Conte left the club by mutual consent on 26 May 2021. The departure was reportedly due to disagreements between Conte and the board over player transfers. In June 2021, Simone Inzaghi was appointed as Conte's replacement. On 8 August 2021, Romelu Lukaku was sold to Chelsea F.C. for €115 million, the highest transfer fee ever received by an Italian football club.
On 12 January 2022, Inter won the Supercoppa Italiana, defeating Juventus 2–1 at San Siro. After conceding the opening goal, Inter equalised with a penalty scored by Lautaro Martínez, and the match finished 1–1 in regulation time. In the last second of extra time, Alexis Sánchez scored the winning goal following a defensive error, giving Inter the first trophy of the season, which was also Simone Inzaghi's first as Inter manager. On 11 May 2022, Inter won the Coppa Italia, defeating Juventus 4–2 at the Stadio Olimpico. After normal time had ended 2–2, with Nicolò Barella and Hakan Çalhanoğlu scoring Inter's goals, Ivan Perišić's brace in extra time gave Inter the win and the second title of the season. The 2021–22 Serie A campaign saw Inter finish in second place while being the league's most prolific attacking side with 84 goals. On 18 January 2023, Inter won the Supercoppa Italiana, defeating Milan 3–0 at King Fahd International Stadium, thanks to goals from Federico Dimarco, Edin Džeko, and Lautaro Martinez.
On 16 May 2023, Inter won against Milan in the semi-finals of 2022–23 UEFA Champions League and qualified for the final, the first time they have reached the final in the UEFA Champions League since 2010. However, they were defeated at the Atatürk Olympic Stadium 1−0 by Manchester City after a second half goal from Rodri.
One of the founders of Inter, a painter named Giorgio Muggiani, was responsible for the design of the first Inter logo in 1908. The first design incorporated the letters "FCIM" in the centre of a series of circles that formed the badge of the club. The basic elements of the design have remained constant even as finer details have been modified over the years. Starting at the 1999–2000 season, the original club crest was reduced in size, to give place for the addition of the club's name and foundation year at the upper and lower part of the logo respectively.
In 2007, the logo was returned to the pre-1999–2000 era. It was given a more modern look with a smaller Scudetto star and lighter color scheme. This version was used until July 2014, when the club decided to undertake a rebranding. The most significant difference between the current and the previous logo is the omission of the star from other media except match kits.
Since its founding in 1908, Inter have almost always worn black and blue stripes, earning them the nickname Nerazzurri. According to the tradition, the colours were adopted to represent the nocturnal sky: in fact, the club was established on the night of 9 March, at 23:30; moreover, blue was chosen by Giorgio Muggiani because he considered it to be the opposite colour to red, worn by the Milan Cricket and Football Club rivals.
During the 1928–29 season, however, Inter were forced to abandon their black and blue uniforms. In 1928, Inter's name and philosophy made the ruling Fascist Party uneasy; as a result, during the same year the 20-year-old club was merged with Unione Sportiva Milanese: the new club was named Società Sportiva Ambrosiana after the patron saint of Milan. The flag of Milan (the red cross on white background) replaced the traditional black and blue. In 1929 the black-and-blue jerseys were restored, and after World War II, when the Fascists had fallen from power, the club reverted to their original name. In 2008, Inter celebrated their centenary with a red cross on their away shirt. The cross is reminiscent of the flag of their city, and they continue to use the pattern on their third kit. In 2014, the club adopted a predominantly black home kit with thin blue pinstripes before returning to a more traditional design the following season.
Animals are often used to represent football clubs in Italy – the grass snake, called Biscione, represents Inter. The snake is an important symbol for the city of Milan, appearing often in Milanese heraldry as a coiled viper with a man in its jaws. The symbol is present on the coat of arms of the House of Sforza (which ruled over Italy from Milan during the Renaissance period), the city of Milan, the historical Duchy of Milan (a 400-year state of the Holy Roman Empire) and Insubria (a historical region the city of Milan falls within). For the 2010–11 season, Inter's away kit featured the serpent.
The team's stadium is the 75,923-seat San Siro, officially known as the Stadio Giuseppe Meazza after the former player who represented both Milan and Inter. The more commonly used name, San Siro, is the name of the district where it is located. San Siro has been the home of Milan since 1926, when it was privately built with funding from Milan's chairman at the time, Piero Pirelli. Construction was performed by 120 workers and took 13 and a half months to complete. The stadium was owned by the club until it was sold to the city in 1935, and since 1947 it has been shared with Inter, when they were accepted as a joint tenant.
The first game played at the stadium was on 19 September 1926, when Inter beat Milan 6–3 in a friendly match. Milan played its first league game in San Siro on 19 September 1926, losing 1–2 to Sampierdarenese. From an initial capacity of 35,000 spectators, the stadium has undergone several major renovations. A major structural renovation was made for the 2016 UEFA Champions League Final while another one took place in late 2021 to host the UEFA Nations League final. The stadium is going to be refurbished again in time for Milano Cortina 2026.
Based on the English model for stadiums, San Siro is specifically designed for football matches, as opposed to many multi-purpose stadiums used in Serie A. It is therefore renowned in Italy for its fantastic atmosphere during matches owing to the closeness of the stands to the pitch.
Since 2012, beginning under Massimo Moratti, various proposals and projects for a possible new Inter stadium have been put forward. Between June and July 2019, Inter and Milan announced an agreement for the construction of a new shared stadium in the San Siro area. In the winter of 2021, Giuseppe Sala, the mayor of Milan, gave official permission for the construction of the new stadium next to San Siro, which will be partially demolished and repurposed after the 2026 Olympic Games. In early 2022, Inter and Milan revealed a "plan B" to relocate the new stadium elsewhere in the Greater Milan area, away from San Siro.
Inter is one of the most supported clubs in Italy, according to research published in August 2007 by the Italian newspaper La Repubblica. In the early years (until the First World War), Inter fans from the city of Milan were typically middle class, while Milan fans were typically working class. During Massimo Moratti's ownership, Inter fans were commonly perceived as leaning moderately to the political left, while under Silvio Berlusconi's ownership Milan fans were perceived as leaning moderately to the right. Today, these divisions are anachronistic.
The traditional ultras group of Inter is Boys San; founded in 1969, they are among the oldest such groups and hold a significant place in the history of the Italian ultras scene. Politically, one group of Inter ultras, the Irriducibili, is right-wing and has good relations with the Lazio ultras. As well as Boys San, the main (apolitical) group, there are five other significant groups: Viking (apolitical), Irriducibili (right-wing), Ultras (apolitical), Brianza Alcoolica (apolitical) and Imbastisci (left-wing).
Inter's most vocal fans are known to gather in the Curva Nord, or north curve of the San Siro. This longstanding tradition has led to the Curva Nord being synonymous with the club's most die-hard supporters, who unfurl banners and wave flags in support of their team.
Inter have several rivalries, two of which are highly significant in Italian football. Firstly, they contest the intracity Derby della Madonnina with Milan; the rivalry has existed ever since Inter splintered off from Milan in 1908. The name of the derby refers to the Blessed Virgin Mary, whose statue atop the Milan Cathedral is one of the city's main attractions. The match usually creates a lively atmosphere, with numerous (often humorous or offensive) banners unfurled before kick-off. Flares are commonly present, and they led to the abandonment of the second leg of the 2004–05 Champions League quarter-final between Milan and Inter on 12 April 2005, after a flare thrown from the crowd by an Inter supporter struck Milan goalkeeper Dida on the shoulder.
The other significant rivalry is with Juventus; matches between the two clubs are known as the Derby d'Italia. Until the 2006 Italian football scandal, which saw Juventus relegated, the two were the only Italian clubs never to have played below Serie A. In the 2000s, Inter developed a rivalry with Roma, who finished as runners-up to Inter in all but one of Inter's five Scudetto-winning seasons between 2005–06 and 2009–10. The two sides have also met in five Coppa Italia finals and four Supercoppa Italiana finals since 2006. Other clubs, such as Atalanta and Napoli, are also considered among Inter's rivals. Inter's supporters are collectively known as Interisti, or Nerazzurri.
Inter have won 35 domestic trophies, including the Serie A title 19 times, the Coppa Italia nine times and the Supercoppa Italiana seven times. From 2006 to 2010, the club won five successive league titles, equalling the all-time record that stood until 2017, when Juventus won a sixth successive title. They have won the UEFA Champions League three times: back-to-back in 1964 and 1965, and again in 2010; the last of these completed an unprecedented Italian treble alongside the Coppa Italia and the Scudetto. The club has also won three UEFA Cups (now the UEFA Europa League), two Intercontinental Cups and one FIFA Club World Cup.
Inter has never been relegated from the top flight of Italian football in its entire existence. It is the sole club to have competed in Serie A and its predecessors in every season since its debut in 1909.
Javier Zanetti holds the records for both total appearances and Serie A appearances for Inter, with 858 official games played in total and 618 in Serie A.
Giuseppe Meazza is Inter's all-time top goalscorer, with 284 goals in 408 games. Behind him, in second place, is Alessandro Altobelli with 209 goals in 466 games, and Roberto Boninsegna in third place, with 171 goals over 281 games.
Helenio Herrera had the longest reign as Inter coach, with nine years (eight consecutive) in charge, and is the most successful coach in Inter history with three Scudetti, two European Cups, and two Intercontinental Cup wins. José Mourinho, who was appointed on 2 June 2008, completed his first season in Italy by winning the Serie A title and the Supercoppa Italiana; in his second season he won the first "treble" in Italian history: the Serie A, Coppa Italia and the UEFA Champions League.
Note: Flags indicate national team as defined under FIFA eligibility rules. Players may hold more than one non-FIFA nationality.
Note: Flags indicate national team as defined under FIFA eligibility rules. Players may hold more than one non-FIFA nationality.
Inter Primavera players that received a first-team squad call-up.
Note: Flags indicate national team as defined under FIFA eligibility rules. Players may hold more than one non-FIFA nationality.
3 – Giacinto Facchetti, left back, played for Inter from 1960 to 1978 (posthumous honour). The number was retired on 8 September 2006, four days after Facchetti died of cancer aged 64. The last player to wear the number 3 shirt was the Argentine centre-back Nicolás Burdisso, who switched to the number 16 shirt for the rest of the season.
4 – Javier Zanetti, defensive midfielder, played 858 games for Inter between 1995 and his retirement in the summer of 2014. In June 2014, club chairman Erick Thohir confirmed that Zanetti's number 4 was to be retired out of respect.
Below is a list of Inter chairmen from 1908 until the present day.
Below is a list of Inter coaches from 1909 until the present day.
FC Internazionale Milano S.p.A. was described as one of the financial "black holes" among Italian clubs, heavily dependent on financial contributions from its owner, Massimo Moratti. In June 2006, Pirelli, the club's shirt sponsor and a minority shareholder, sold a 15.26% stake in the club to the Moratti family for €13.5 million; the tyre manufacturer retained 4.2%. However, through a series of capital increases, including a reverse merger in 2006 with an intermediate holding company, Inter Capital S.r.l. (which at the time held 89% of Inter's shares and €70 million of capital), new share issues of €70.8 million in June 2007, €99.9 million in December 2007, €86.6 million in 2008, €70 million in 2009, €40 million in each of 2010 and 2011 and €35 million in 2012, and Thohir's subscription of €75 million of new shares in 2013, Pirelli's holding was diluted to just 0.5%, making it the third-largest shareholder as of 31 December 2015. Inter carried out yet another recapitalization, reserved for Suning Holdings Group, in 2016. In the prospectus for Pirelli's second IPO in 2017, the company revealed that the value of its remaining Inter shares had been written off to zero in the 2016 financial year. Inter also received direct capital contributions from its shareholders to cover losses (Italian: versamenti a copertura perdite), which were made without issuing new shares.
Shortly before Thohir's takeover, the consolidated balance sheet of "Internazionale Holding S.r.l." showed that the group as a whole had bank debt of €157 million at the end of the 2012–13 financial year, including €15.674 million owed to the Istituto per il Credito Sportivo (ICS) by the subsidiary "Inter Brand S.r.l." and by the club itself. In 2006 Inter had sold its brand for €158 million to that newly formed subsidiary, a special purpose entity with share capital of €40 million; the deal allowed Internazionale to report a net loss of just €31 million in its separate financial statements. At the same time the subsidiary secured a €120 million loan from Banca Antonveneta, to be repaid in installments until 30 June 2016; La Repubblica described the deal as "doping". In September 2011 Inter secured a €24.8 million loan from ICS by factoring the Pirelli sponsorship for the 2012–13 and 2013–14 seasons, at an interest rate of three-month Euribor plus a 1.95% spread. In June 2014 the new Inter group secured a €230 million loan from Goldman Sachs and UniCredit at three-month Euribor plus a 5.5% spread, setting up a new subsidiary, "Inter Media and Communication S.r.l.", as the debt carrier; €200 million of the loan was used to refinance the group's debt. Of the €230 million, €1 million (plus interest) was due on 30 June 2015, €45 million (plus interest) was to be repaid in 15 installments from 30 September 2015 to 31 March 2019, and €184 million (plus interest) was due on 30 June 2019. On the ownership side, the Hong Kong-based International Sports Capital HK Limited had pledged the shares of the Italy-based International Sports Capital S.p.A. (the direct holding company of Inter) to CPPIB Credit Investments for €170 million in 2015, at interest rates ranging from 8% per annum (due March 2018) to 15% per annum (due March 2020). ISC repaid the notes on 1 July 2016 after selling part of its Inter shares to Suning Holdings Group. However, in late 2016 the shares of ISC S.p.A. were pledged again by ISC HK, this time to private equity funds of OCP Asia, for US$80 million. In December 2017 the club also refinanced €300 million of its debt by issuing a corporate bond to the market, with Goldman Sachs as bookrunner, at an interest rate of 4.875% per annum.
Considering revenue alone, Inter surpassed their city rivals in the Deloitte Football Money League for the first time in the 2008–09 season, ranking ninth, one place behind Juventus in eighth, with Milan tenth. In the 2009–10 season Inter remained ninth, ahead of Juventus (10th), but Milan retook the lead among Italian clubs in seventh. Inter rose to eighth in 2010–11, still one place behind Milan. Inter then fell to 11th in 2011–12, 15th in 2012–13, 17th in 2013–14 and 19th in both 2014–15 and 2015–16, before ranking 15th in the 2016–17 Money League.
In the 2010 Football Money League (covering the 2008–09 season), Inter's normalized revenue of €196.5 million was divided between matchday (14%, €28.2 million), broadcasting (59%, €115.7 million, up 7% or €8 million) and commercial (27%, €52.6 million, up 43%). Kit sponsors Nike and Pirelli contributed €18.1 million and €9.3 million respectively to commercial revenue, while broadcasting revenue was boosted by €1.6 million (6%) from the Champions League distribution. Deloitte argued that structural issues in Italian football, particularly weak matchday revenue, were holding Inter back compared with other European giants, and that developing their own stadia would make Serie A clubs more competitive on the world stage.
In the 2009–10 season Inter's revenue was boosted by the sale of Ibrahimović, the treble, and the release clause paid for coach José Mourinho. According to the normalized figures in Deloitte's 2011 Football Money League, revenue for 2009–10 increased by €28.3 million (14%) to €224.8 million, split between matchday, broadcasting and commercial in a ratio of 17%:62%:21%.
For the 2010–11 season, Serie A clubs began negotiating television rights collectively rather than individually. This was expected to reduce broadcasting revenue for larger clubs such as Juventus and Inter, with smaller clubs gaining at their expense; in the event, Inter's figures also included extraordinary income of €13 million from RAI. In the 2012 Football Money League (2010–11 season), the normalized revenue was €211.4 million, split between matchday, broadcasting and commercial in a ratio of 16%:58%:26%.
However, combining revenue and costs, Inter made a net loss of €206 million in the 2006–07 season (including €112 million on an extraordinary basis, due to the abolition of the non-standard accounting practice of the special amortization fund), followed by net losses of €148 million in 2007–08, €154 million in 2008–09, €69 million in 2009–10, €87 million in 2010–11, €77 million in 2011–12 and €80 million in 2012–13, and a net profit of €33 million in 2013–14, the last thanks to special income from the establishment of the subsidiary Inter Media and Communication. All of the above figures are from the club's separate financial statements. Figures from the consolidated financial statements have been published since the 2014–15 season: net losses of €140.4 million (2014–15), €59.6 million (2015–16, before the 2017 restatement) and €24.6 million (2016–17).
In 2015 Inter and Roma were the only two Italian clubs sanctioned by UEFA for breaching the UEFA Financial Fair Play Regulations; AC Milan followed later, and was barred from European competition in 2018. Under its settlement agreement, Inter agreed to achieve an aggregate break-even over the three years from 2015 to 2018, with a net loss of at most €30 million permitted in the 2015–16 season and break-even required from the 2016–17 season onwards. Inter was also fined €6 million, with an additional €14 million conditional on compliance during the probation period.
Inter also used an accounting manoeuvre in the transfer market in mid-2015: Stevan Jovetić and Miranda were signed on temporary deals with an obligation to buy outright in 2017, reducing their cost during the loan period. Moreover, despite investing heavily in new signings such as Geoffrey Kondogbia and Ivan Perišić, which potentially increased amortization costs, Inter also sold Mateo Kovačić for €29 million, booking a windfall profit. In November 2018, documents from Football Leaks revealed that loan signings such as that of Xherdan Shaqiri in January 2015 in fact carried conditions that made the outright purchase effectively inevitable.
On 21 April 2017, Inter announced that its FFP-adjusted net loss for the 2015–16 season was within the permitted limit of €30 million. However, on the same day UEFA announced that the reduction of Inter's squad size in European competitions would not yet be lifted, as the targets in the settlement agreement had only been partially fulfilled. UEFA made the same announcement in June 2018, based on Inter's 2016–17 financial results.
In February 2020, Inter Milan sued Major League Soccer (MLS) for trademark infringement, claiming that the term "Inter" is synonymous with its club and no other.
| Football Club Internazionale Milano, commonly referred to as Internazionale or simply Inter, and colloquially known as Inter Milan in English-speaking countries, is an Italian professional football club based in Milan, Lombardy. Inter is the only Italian side to have always competed in the top flight of Italian football since its debut in 1909. Founded in 1908 following a schism within the Milan Cricket and Football Club, Inter won its first championship in 1910. Since its formation, the club has won 35 domestic trophies, including 19 league titles, 9 Coppa Italia, and 7 Supercoppa Italiana. From 2006 to 2010, the club won five successive league titles, equalling the all-time record at that time. They have won the European Cup/Champions League three times: two back-to-back in 1964 and 1965, and then another in 2010. Their latest win completed an unprecedented Italian seasonal treble, with Inter winning the Coppa Italia and the Scudetto the same year. The club has also won three UEFA Cups, two Intercontinental Cups and one FIFA Club World Cup. Inter's home games are played at the San Siro stadium, which they share with city rivals AC Milan. The stadium is the largest in Italian football with a capacity of 75,817. They have long-standing rivalries with Milan, with whom they contest the Derby della Madonnina, and Juventus, with whom they contest the Derby d'Italia; their rivalry with the former is one of the most followed derbies in football. As of 2019, Inter has the highest home game attendance in Italy and the sixth highest attendance in Europe. Since 2016, the club has been majority-owned by Chinese holding company Suning Holdings Group. Inter is one of the most valuable clubs in Italian and world football. | 2001-11-01T15:43:10Z | 2023-12-28T14:54:27Z | [
"Template:Pp-move",
"Template:Cite press release",
"Template:In lang",
"Template:Use dmy dates",
"Template:Main",
"Template:For",
"Template:Wikinews category",
"Template:Authority control",
"Template:Blockquote",
"Template:Fs mid",
"Template:Portal",
"Template:Cite book",
"Template:Official website",
"Template:Commons category",
"Template:Infobox football club",
"Template:Football squad player",
"Template:Lang-it",
"Template:Reflist",
"Template:Cite web",
"Template:Flagicon",
"Template:Cite news",
"Template:Inter Milan",
"Template:Navboxes",
"Template:Updated",
"Template:Football squad end",
"Template:Football squad mid",
"Template:Short description",
"Template:Redirect",
"Template:Pp-semi-indef",
"Template:As of",
"Template:See also",
"Template:Use British English",
"Template:Frac",
"Template:Fs start",
"Template:Football squad start",
"Template:IPA-it",
"Template:Commons",
"Template:Unbulleted list",
"Template:Webarchive"
]
| https://en.wikipedia.org/wiki/Inter_Milan |
15,120 | Interferon | Interferons (IFNs, /ˌɪntərˈfɪərɒn/) are a group of signaling proteins made and released by host cells in response to the presence of several viruses. In a typical scenario, a virus-infected cell will release interferons causing nearby cells to heighten their anti-viral defenses.
IFNs belong to the large class of proteins known as cytokines, molecules used for communication between cells to trigger the protective defenses of the immune system that help eradicate pathogens. Interferons are named for their ability to "interfere" with viral replication by protecting cells from virus infections. However, virus-encoded genetic elements have the ability to antagonize the IFN response contributing to viral pathogenesis and viral diseases. IFNs also have various other functions: they activate immune cells, such as natural killer cells and macrophages, and they increase host defenses by up-regulating antigen presentation by virtue of increasing the expression of major histocompatibility complex (MHC) antigens. Certain symptoms of infections, such as fever, muscle pain and "flu-like symptoms", are also caused by the production of IFNs and other cytokines.
More than twenty distinct IFN genes and proteins have been identified in animals, including humans. They are typically divided among three classes: Type I IFN, Type II IFN, and Type III IFN. IFNs belonging to all three classes are important for fighting viral infections and for the regulation of the immune system.
Based on the type of receptor through which they signal, human interferons have been classified into three major types.
In general, type I and II interferons are responsible for regulating and activating the immune response. Expression of type I and III IFNs can be induced in virtually all cell types upon recognition of viral components, especially nucleic acids, by cytoplasmic and endosomal receptors, whereas type II interferon is induced by cytokines such as IL-12, and its expression is restricted to immune cells such as T cells and NK cells.
All interferons share several common effects: they are antiviral agents and they modulate functions of the immune system. Administration of Type I IFN has been shown experimentally to inhibit tumor growth in animals, but the beneficial action in human tumors has not been widely documented. A virus-infected cell releases viral particles that can infect nearby cells. However, the infected cell can protect neighboring cells against a potential infection of the virus by releasing interferons. In response to interferon, cells produce large amounts of an enzyme known as protein kinase R (PKR). This enzyme phosphorylates a protein known as eIF-2 in response to new viral infections; the phosphorylated eIF-2 forms an inactive complex with another protein, called eIF2B, to reduce protein synthesis within the cell. Another cellular enzyme, RNAse L—also induced by interferon action—destroys RNA within the cells to further reduce protein synthesis of both viral and host genes. Inhibited protein synthesis impairs both virus replication and infected host cells. In addition, interferons induce production of hundreds of other proteins—known collectively as interferon-stimulated genes (ISGs)—that have roles in combating viruses and other actions produced by interferon. They also limit viral spread by increasing p53 activity, which kills virus-infected cells by promoting apoptosis. The effect of IFN on p53 is also linked to its protective role against certain cancers.
Another function of interferons is to up-regulate major histocompatibility complex molecules, MHC I and MHC II, and increase immunoproteasome activity. All interferons significantly enhance the presentation of MHC I dependent antigens. Interferon gamma (IFN-gamma) also significantly stimulates the MHC II-dependent presentation of antigens. Higher MHC I expression increases presentation of viral and abnormal peptides from cancer cells to cytotoxic T cells, while the immunoproteasome processes these peptides for loading onto the MHC I molecule, thereby increasing the recognition and killing of infected or malignant cells. Higher MHC II expression increases presentation of these peptides to helper T cells; these cells release cytokines (such as more interferons and interleukins, among others) that signal to and co-ordinate the activity of other immune cells.
Interferons can also suppress angiogenesis by down regulation of angiogenic stimuli deriving from tumor cells. They also suppress the proliferation of endothelial cells. Such suppression causes a decrease in tumor angiogenesis, a decrease in its vascularization and subsequent growth inhibition. Interferons, such as interferon gamma, directly activate other immune cells, such as macrophages and natural killer cells.
Production of interferons occurs mainly in response to microbes, such as viruses and bacteria, and their products. Binding of molecules uniquely found in microbes—viral glycoproteins, viral RNA, bacterial endotoxin (lipopolysaccharide), bacterial flagella, CpG motifs—by pattern recognition receptors, such as membrane bound toll like receptors or the cytoplasmic receptors RIG-I or MDA5, can trigger release of IFNs. Toll Like Receptor 3 (TLR3) is important for inducing interferons in response to the presence of double-stranded RNA viruses; the ligand for this receptor is double-stranded RNA (dsRNA). After binding dsRNA, this receptor activates the transcription factors IRF3 and NF-κB, which are important for initiating synthesis of many inflammatory proteins. RNA interference technology tools such as siRNA or vector-based reagents can either silence or stimulate interferon pathways. Release of IFN from cells (specifically IFN-γ in lymphoid cells) is also induced by mitogens. Other cytokines, such as interleukin 1, interleukin 2, interleukin-12, tumor necrosis factor and colony-stimulating factor, can also enhance interferon production.
By interacting with their specific receptors, IFNs activate signal transducer and activator of transcription (STAT) complexes; STATs are a family of transcription factors that regulate the expression of certain immune system genes. Some STATs are activated by both type I and type II IFNs. However each IFN type can also activate unique STATs.
STAT activation initiates the most well-defined cell signaling pathway for all IFNs, the classical Janus kinase-STAT (JAK-STAT) signaling pathway. In this pathway, JAKs associate with IFN receptors and, following receptor engagement with IFN, phosphorylate both STAT1 and STAT2. As a result, an IFN-stimulated gene factor 3 (ISGF3) complex forms—this contains STAT1, STAT2 and a third transcription factor called IRF9—and moves into the cell nucleus. Inside the nucleus, the ISGF3 complex binds to specific nucleotide sequences called IFN-stimulated response elements (ISREs) in the promoters of certain genes, known as IFN stimulated genes ISGs. Binding of ISGF3 and other transcriptional complexes activated by IFN signaling to these specific regulatory elements induces transcription of those genes. A collection of known ISGs is available on Interferome, a curated online database of ISGs (www.interferome.org); Additionally, STAT homodimers or heterodimers form from different combinations of STAT-1, -3, -4, -5, or -6 during IFN signaling; these dimers initiate gene transcription by binding to IFN-activated site (GAS) elements in gene promoters. Type I IFNs can induce expression of genes with either ISRE or GAS elements, but gene induction by type II IFN can occur only in the presence of a GAS element.
In addition to the JAK-STAT pathway, IFNs can activate several other signaling cascades. For instance, both type I and type II IFNs activate a member of the CRK family of adaptor proteins called CRKL, a nuclear adaptor for STAT5 that also regulates signaling through the C3G/Rap1 pathway. Type I IFNs further activate p38 mitogen-activated protein kinase (MAP kinase) to induce gene transcription. Antiviral and antiproliferative effects specific to type I IFNs result from p38 MAP kinase signaling. The phosphatidylinositol 3-kinase (PI3K) signaling pathway is also regulated by both type I and type II IFNs. PI3K activates P70-S6 Kinase 1, an enzyme that increases protein synthesis and cell proliferation; phosphorylates ribosomal protein s6, which is involved in protein synthesis; and phosphorylates a translational repressor protein called eukaryotic translation-initiation factor 4E-binding protein 1 (EIF4EBP1) in order to deactivate it.
Interferons can disrupt signaling by other stimuli. For example, interferon alpha induces RIG-G, which disrupts the CSN5-containing COP9 signalosome (CSN), a highly conserved multiprotein complex implicated in protein deneddylation, deubiquitination, and phosphorylation. RIG-G has shown the capacity to inhibit NF-κB and STAT3 signaling in lung cancer cells, which demonstrates the potential of type I IFNs.
Many viruses have evolved mechanisms to resist interferon activity. They circumvent the IFN response by blocking downstream signaling events that occur after the cytokine binds to its receptor, by preventing further IFN production, and by inhibiting the functions of proteins that are induced by IFN. Viruses that inhibit IFN signaling include Japanese Encephalitis Virus (JEV), dengue type 2 virus (DEN-2), and viruses of the herpesvirus family, such as human cytomegalovirus (HCMV) and Kaposi's sarcoma-associated herpesvirus (KSHV or HHV8). Viral proteins proven to affect IFN signaling include EBV nuclear antigen 1 (EBNA1) and EBV nuclear antigen 2 (EBNA-2) from Epstein-Barr virus, the large T antigen of Polyomavirus, the E7 protein of Human papillomavirus (HPV), and the B18R protein of vaccinia virus. Reducing IFN-α activity may prevent signaling via STAT1, STAT2, or IRF9 (as with JEV infection) or through the JAK-STAT pathway (as with DEN-2 infection). Several poxviruses encode soluble IFN receptor homologs—like the B18R protein of the vaccinia virus—that bind to and prevent IFN interacting with its cellular receptor, impeding communication between this cytokine and its target cells. Some viruses can encode proteins that bind to double-stranded RNA (dsRNA) to prevent the activity of RNA-dependent protein kinases; this is the mechanism reovirus adopts using its sigma 3 (σ3) protein, and vaccinia virus employs using the gene product of its E3L gene, p25. The ability of interferon to induce protein production from interferon stimulated genes (ISGs) can also be affected. Production of protein kinase R, for example, can be disrupted in cells infected with JEV. Some viruses escape the anti-viral activities of interferons by gene (and thus protein) mutation. The H5N1 influenza virus, also known as bird flu, has resistance to interferon and other anti-viral cytokines that is attributed to a single amino acid change in its Non-Structural Protein 1 (NS1), although the precise mechanism of how this confers immunity is unclear. The relative resistance of hepatitis C virus genotype I to interferon-based therapy has been attributed in part to homology between viral envelope protein E2 and host protein kinase R, a mediator of interferon-induced suppression of viral protein translation, although mechanisms of acquired and intrinsic resistance to interferon therapy in HCV are polyfactorial.
Coronaviruses evade innate immunity during the first ten days of viral infection. In the early stages of infection, SARS-CoV-2 induces an even lower interferon type I (IFN-I) response than SARS-CoV, which itself is a weak IFN-I inducer in human cells. SARS-CoV-2 limits the IFN-III response as well. Reduced numbers of plasmacytoid dendritic cells with age are associated with increased COVID-19 severity, possibly because these cells are substantial interferon producers.
Ten percent of patients with life-threatening COVID-19 have autoantibodies against type I interferon.
Delayed IFN-I response contributes to the pathogenic inflammation (cytokine storm) seen in later stages of COVID-19 disease. Application of IFN-I prior to (or in the very early stages of) viral infection can be protective, as can treatment with pegylated IFN-λIII, which should be validated in randomized clinical trials.
Interferon beta-1a and interferon beta-1b are used to treat and control multiple sclerosis, an autoimmune disorder. This treatment may help in reducing attacks in relapsing-remitting multiple sclerosis and slowing disease progression and activity in secondary progressive multiple sclerosis.
Interferon therapy is used (in combination with chemotherapy and radiation) as a treatment for some cancers. This treatment can be used in hematological malignancy, such as in leukemia and lymphomas including hairy cell leukemia, chronic myeloid leukemia, nodular lymphoma, and cutaneous T-cell lymphoma. Patients with recurrent melanomas receive recombinant IFN-α2b.
Both hepatitis B and hepatitis C can be treated with IFN-α, often in combination with other antiviral drugs. Some of those treated with interferon have a sustained virological response and can eliminate hepatitis virus in the case of hepatitis C. The most common strain of hepatitis C virus (HCV) worldwide—genotype I— can be treated with interferon-α, ribavirin and protease inhibitors such as telaprevir, boceprevir or the nucleotide analog polymerase inhibitor sofosbuvir. Biopsies of patients given the treatment show reductions in liver damage and cirrhosis. Control of chronic hepatitis C by IFN is associated with reduced hepatocellular carcinoma. A single nucleotide polymorphism (SNP) in the gene encoding the type III interferon IFN-λ3 was found to be protective against chronic infection following proven HCV infection and predicted treatment response to interferon-based regimens. The frequency of the SNP differed significantly by race, partly explaining observed differences in response to interferon therapy between European-Americans and African-Americans.
Unconfirmed results suggested that interferon eye drops may be an effective treatment for people who have herpes simplex virus epithelial keratitis, a type of eye infection. There is no clear evidence to suggest that removing the infected tissue (debridement) followed by interferon drops is an effective treatment approach for these types of eye infections. Unconfirmed results suggested that the combination of interferon and an antiviral agent may speed the healing process compared to antiviral therapy alone.
When used in systemic therapy, IFNs are mostly administered by an intramuscular injection. The injection of IFNs in the muscle or under the skin is generally well tolerated. The most frequent adverse effects are flu-like symptoms: increased body temperature, feeling ill, fatigue, headache, muscle pain, convulsion, dizziness, hair thinning, and depression. Erythema, pain, and hardness at the site of injection are also frequently observed. IFN therapy causes immunosuppression, in particular through neutropenia and can result in some infections manifesting in unusual ways.
Several different types of interferons are approved for use in humans. One was first approved for medical use in 1986. For example, in January 2001, the Food and Drug Administration (FDA) approved the use of PEGylated interferon-alpha in the USA; in this formulation of PEGylated interferon-alpha-2b (Pegintron), polyethylene glycol is linked to the interferon molecule to make the interferon last longer in the body. Approval for PEGylated interferon-alpha-2a (Pegasys) followed in October 2002. These PEGylated drugs are injected once weekly, rather than the two or three times per week necessary for conventional interferon-alpha. When used with the antiviral drug ribavirin, PEGylated interferon is effective in the treatment of hepatitis C; at least 75% of people with hepatitis C genotypes 2 or 3 benefit from interferon treatment, although this is effective in less than 50% of people infected with genotype 1 (the more common form of hepatitis C virus in both the U.S. and Western Europe). Interferon-containing regimens may also include protease inhibitors such as boceprevir and telaprevir.
There are also interferon-inducing drugs, notably tilorone, which has been shown to be effective against Ebola virus.
Interferons were first described in 1957 by Alick Isaacs and Jean Lindenmann at the National Institute for Medical Research in London; the discovery was a result of their studies of viral interference. Viral interference refers to the inhibition of virus growth caused by previous exposure of cells to an active or a heat-inactivated virus. Isaacs and Lindenmann were working with a system that involved the inhibition of the growth of live influenza virus in chicken embryo chorioallantoic membranes by heat-inactivated influenza virus. Their experiments revealed that this interference was mediated by a protein released by cells in the heat-inactivated influenza virus-treated membranes. They published their results in 1957 naming the antiviral factor they had discovered interferon. The findings of Isaacs and Lindenmann have been widely confirmed and corroborated in the literature.
Furthermore, others may have made observations on interferons before the 1957 publication of Isaacs and Lindenmann. For example, during research to produce a more efficient vaccine for smallpox, Yasu-ichi Nagano and Yasuhiko Kojima—two Japanese virologists working at the Institute for Infectious Diseases at the University of Tokyo—noticed inhibition of viral growth in an area of rabbit-skin or testis previously inoculated with UV-inactivated virus. They hypothesised that some "viral inhibitory factor" was present in the tissues infected with virus and attempted to isolate and characterize this factor from tissue homogenates. Independently, Monto Ho, in John Enders's lab, observed in 1957 that attenuated poliovirus conferred a species specific anti-viral effect in human amniotic cell cultures. They described these observations in a 1959 publication, naming the responsible factor viral inhibitory factor (VIF). It took another fifteen to twenty years, using somatic cell genetics, to show that the interferon action gene and interferon gene reside in different human chromosomes. The purification of human beta interferon did not occur until 1977. Y.H. Tan and his co-workers purified and produced biologically active, radio-labeled human beta interferon by superinducing the interferon gene in fibroblast cells, and they showed its active site contains tyrosine residues. Tan's laboratory isolated sufficient amounts of human beta interferon to perform the first amino acid, sugar composition and N-terminal analyses. They showed that human beta interferon was an unusually hydrophobic glycoprotein. This explained the large loss of interferon activity when preparations were transferred from test tube to test tube or from vessel to vessel during purification. The analyses showed the reality of interferon activity by chemical verification. The purification of human alpha interferon was not reported until 1978. A series of publications from the laboratories of Sidney Pestka and Alan Waldman between 1978 and 1981, describe the purification of the type I interferons IFN-α and IFN-β. By the early 1980s, genes for these interferons had been cloned, adding further definitive proof that interferons were responsible for interfering with viral replication. Gene cloning also confirmed that IFN-α was encoded by a family of many related genes. The type II IFN (IFN-γ) gene was also isolated around this time.
Interferon was first synthesized manually at Rockefeller University in the lab of Dr. Bruce Merrifield, using solid phase peptide synthesis, one amino acid at a time. He later won the Nobel Prize in chemistry. Interferon was scarce and expensive until 1980, when the interferon gene was inserted into bacteria using recombinant DNA technology, allowing mass cultivation and purification from bacterial cultures or derived from yeasts. Interferon can also be produced by recombinant mammalian cells. Before the early 1970s, large scale production of human interferon had been pioneered by Kari Cantell. He produced large amounts of human alpha interferon from large quantities of human white blood cells collected by the Finnish Blood Bank. Large amounts of human beta interferon were made by superinducing the beta interferon gene in human fibroblast cells.
Cantell's and Tan's methods of making large amounts of natural interferon were critical for chemical characterisation, clinical trials and the preparation of small amounts of interferon messenger RNA to clone the human alpha and beta interferon genes. The superinduced human beta interferon messenger RNA was prepared by Tan's lab for Cetus corp. to clone the human beta interferon gene in bacteria and the recombinant interferon was developed as 'betaseron' and approved for the treatment of MS. Superinduction of the human beta interferon gene was also used by Israeli scientists to manufacture human beta interferon. | [
{
"paragraph_id": 0,
"text": "Interferons (IFNs, /ˌɪntərˈfɪərɒn/) are a group of signaling proteins made and released by host cells in response to the presence of several viruses. In a typical scenario, a virus-infected cell will release interferons causing nearby cells to heighten their anti-viral defenses.",
"title": ""
},
{
"paragraph_id": 1,
"text": "IFNs belong to the large class of proteins known as cytokines, molecules used for communication between cells to trigger the protective defenses of the immune system that help eradicate pathogens. Interferons are named for their ability to \"interfere\" with viral replication by protecting cells from virus infections. However, virus-encoded genetic elements have the ability to antagonize the IFN response contributing to viral pathogenesis and viral diseases. IFNs also have various other functions: they activate immune cells, such as natural killer cells and macrophages, and they increase host defenses by up-regulating antigen presentation by virtue of increasing the expression of major histocompatibility complex (MHC) antigens. Certain symptoms of infections, such as fever, muscle pain and \"flu-like symptoms\", are also caused by the production of IFNs and other cytokines.",
"title": ""
},
{
"paragraph_id": 2,
"text": "More than twenty distinct IFN genes and proteins have been identified in animals, including humans. They are typically divided among three classes: Type I IFN, Type II IFN, and Type III IFN. IFNs belonging to all three classes are important for fighting viral infections and for the regulation of the immune system.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Based on the type of receptor through which they signal, human interferons have been classified into three major types.",
"title": "Types of interferon"
},
{
"paragraph_id": 4,
"text": "In general, type I and II interferons are responsible for regulating and activating the immune response. Expression of type I and III IFNs can be induced in virtually all cell types upon recognition of viral components, especially nucleic acids, by cytoplasmic and endosomal receptors, whereas type II interferon is induced by cytokines such as IL-12, and its expression is restricted to immune cells such as T cells and NK cells.",
"title": "Types of interferon"
},
{
"paragraph_id": 5,
"text": "All interferons share several common effects: they are antiviral agents and they modulate functions of the immune system. Administration of Type I IFN has been shown experimentally to inhibit tumor growth in animals, but the beneficial action in human tumors has not been widely documented. A virus-infected cell releases viral particles that can infect nearby cells. However, the infected cell can protect neighboring cells against a potential infection of the virus by releasing interferons. In response to interferon, cells produce large amounts of an enzyme known as protein kinase R (PKR). This enzyme phosphorylates a protein known as eIF-2 in response to new viral infections; the phosphorylated eIF-2 forms an inactive complex with another protein, called eIF2B, to reduce protein synthesis within the cell. Another cellular enzyme, RNAse L—also induced by interferon action—destroys RNA within the cells to further reduce protein synthesis of both viral and host genes. Inhibited protein synthesis impairs both virus replication and infected host cells. In addition, interferons induce production of hundreds of other proteins—known collectively as interferon-stimulated genes (ISGs)—that have roles in combating viruses and other actions produced by interferon. They also limit viral spread by increasing p53 activity, which kills virus-infected cells by promoting apoptosis. The effect of IFN on p53 is also linked to its protective role against certain cancers.",
"title": "Function"
},
{
"paragraph_id": 6,
"text": "Another function of interferons is to up-regulate major histocompatibility complex molecules, MHC I and MHC II, and increase immunoproteasome activity. All interferons significantly enhance the presentation of MHC I dependent antigens. Interferon gamma (IFN-gamma) also significantly stimulates the MHC II-dependent presentation of antigens. Higher MHC I expression increases presentation of viral and abnormal peptides from cancer cells to cytotoxic T cells, while the immunoproteasome processes these peptides for loading onto the MHC I molecule, thereby increasing the recognition and killing of infected or malignant cells. Higher MHC II expression increases presentation of these peptides to helper T cells; these cells release cytokines (such as more interferons and interleukins, among others) that signal to and co-ordinate the activity of other immune cells.",
"title": "Function"
},
{
"paragraph_id": 7,
"text": "Interferons can also suppress angiogenesis by down regulation of angiogenic stimuli deriving from tumor cells. They also suppress the proliferation of endothelial cells. Such suppression causes a decrease in tumor angiogenesis, a decrease in its vascularization and subsequent growth inhibition. Interferons, such as interferon gamma, directly activate other immune cells, such as macrophages and natural killer cells.",
"title": "Function"
},
{
"paragraph_id": 8,
"text": "Production of interferons occurs mainly in response to microbes, such as viruses and bacteria, and their products. Binding of molecules uniquely found in microbes—viral glycoproteins, viral RNA, bacterial endotoxin (lipopolysaccharide), bacterial flagella, CpG motifs—by pattern recognition receptors, such as membrane bound toll like receptors or the cytoplasmic receptors RIG-I or MDA5, can trigger release of IFNs. Toll Like Receptor 3 (TLR3) is important for inducing interferons in response to the presence of double-stranded RNA viruses; the ligand for this receptor is double-stranded RNA (dsRNA). After binding dsRNA, this receptor activates the transcription factors IRF3 and NF-κB, which are important for initiating synthesis of many inflammatory proteins. RNA interference technology tools such as siRNA or vector-based reagents can either silence or stimulate interferon pathways. Release of IFN from cells (specifically IFN-γ in lymphoid cells) is also induced by mitogens. Other cytokines, such as interleukin 1, interleukin 2, interleukin-12, tumor necrosis factor and colony-stimulating factor, can also enhance interferon production.",
"title": "Induction of interferons"
},
{
"paragraph_id": 9,
"text": "By interacting with their specific receptors, IFNs activate signal transducer and activator of transcription (STAT) complexes; STATs are a family of transcription factors that regulate the expression of certain immune system genes. Some STATs are activated by both type I and type II IFNs. However each IFN type can also activate unique STATs.",
"title": "Downstream signaling"
},
{
"paragraph_id": 10,
"text": "STAT activation initiates the most well-defined cell signaling pathway for all IFNs, the classical Janus kinase-STAT (JAK-STAT) signaling pathway. In this pathway, JAKs associate with IFN receptors and, following receptor engagement with IFN, phosphorylate both STAT1 and STAT2. As a result, an IFN-stimulated gene factor 3 (ISGF3) complex forms—this contains STAT1, STAT2 and a third transcription factor called IRF9—and moves into the cell nucleus. Inside the nucleus, the ISGF3 complex binds to specific nucleotide sequences called IFN-stimulated response elements (ISREs) in the promoters of certain genes, known as IFN stimulated genes ISGs. Binding of ISGF3 and other transcriptional complexes activated by IFN signaling to these specific regulatory elements induces transcription of those genes. A collection of known ISGs is available on Interferome, a curated online database of ISGs (www.interferome.org); Additionally, STAT homodimers or heterodimers form from different combinations of STAT-1, -3, -4, -5, or -6 during IFN signaling; these dimers initiate gene transcription by binding to IFN-activated site (GAS) elements in gene promoters. Type I IFNs can induce expression of genes with either ISRE or GAS elements, but gene induction by type II IFN can occur only in the presence of a GAS element.",
"title": "Downstream signaling"
},
{
"paragraph_id": 11,
"text": "In addition to the JAK-STAT pathway, IFNs can activate several other signaling cascades. For instance, both type I and type II IFNs activate a member of the CRK family of adaptor proteins called CRKL, a nuclear adaptor for STAT5 that also regulates signaling through the C3G/Rap1 pathway. Type I IFNs further activate p38 mitogen-activated protein kinase (MAP kinase) to induce gene transcription. Antiviral and antiproliferative effects specific to type I IFNs result from p38 MAP kinase signaling. The phosphatidylinositol 3-kinase (PI3K) signaling pathway is also regulated by both type I and type II IFNs. PI3K activates P70-S6 Kinase 1, an enzyme that increases protein synthesis and cell proliferation; phosphorylates ribosomal protein s6, which is involved in protein synthesis; and phosphorylates a translational repressor protein called eukaryotic translation-initiation factor 4E-binding protein 1 (EIF4EBP1) in order to deactivate it.",
"title": "Downstream signaling"
},
{
"paragraph_id": 12,
"text": "Interferons can disrupt signaling by other stimuli. For example, interferon alpha induces RIG-G, which disrupts the CSN5-containing COP9 signalosome (CSN), a highly conserved multiprotein complex implicated in protein deneddylation, deubiquitination, and phosphorylation. RIG-G has shown the capacity to inhibit NF-κB and STAT3 signaling in lung cancer cells, which demonstrates the potential of type I IFNs.",
"title": "Downstream signaling"
},
{
"paragraph_id": 13,
"text": "Many viruses have evolved mechanisms to resist interferon activity. They circumvent the IFN response by blocking downstream signaling events that occur after the cytokine binds to its receptor, by preventing further IFN production, and by inhibiting the functions of proteins that are induced by IFN. Viruses that inhibit IFN signaling include Japanese Encephalitis Virus (JEV), dengue type 2 virus (DEN-2), and viruses of the herpesvirus family, such as human cytomegalovirus (HCMV) and Kaposi's sarcoma-associated herpesvirus (KSHV or HHV8). Viral proteins proven to affect IFN signaling include EBV nuclear antigen 1 (EBNA1) and EBV nuclear antigen 2 (EBNA-2) from Epstein-Barr virus, the large T antigen of Polyomavirus, the E7 protein of Human papillomavirus (HPV), and the B18R protein of vaccinia virus. Reducing IFN-α activity may prevent signaling via STAT1, STAT2, or IRF9 (as with JEV infection) or through the JAK-STAT pathway (as with DEN-2 infection). Several poxviruses encode soluble IFN receptor homologs—like the B18R protein of the vaccinia virus—that bind to and prevent IFN interacting with its cellular receptor, impeding communication between this cytokine and its target cells. Some viruses can encode proteins that bind to double-stranded RNA (dsRNA) to prevent the activity of RNA-dependent protein kinases; this is the mechanism reovirus adopts using its sigma 3 (σ3) protein, and vaccinia virus employs using the gene product of its E3L gene, p25. The ability of interferon to induce protein production from interferon stimulated genes (ISGs) can also be affected. Production of protein kinase R, for example, can be disrupted in cells infected with JEV. Some viruses escape the anti-viral activities of interferons by gene (and thus protein) mutation. The H5N1 influenza virus, also known as bird flu, has resistance to interferon and other anti-viral cytokines that is attributed to a single amino acid change in its Non-Structural Protein 1 (NS1), although the precise mechanism of how this confers immunity is unclear. The relative resistance of hepatitis C virus genotype I to interferon-based therapy has been attributed in part to homology between viral envelope protein E2 and host protein kinase R, a mediator of interferon-induced suppression of viral protein translation, although mechanisms of acquired and intrinsic resistance to interferon therapy in HCV are polyfactorial.",
"title": "Viral resistance to interferons"
},
{
"paragraph_id": 14,
"text": "Coronaviruses evade innate immunity during the first ten days of viral infection. In the early stages of infection, SARS-CoV-2 induces an even lower interferon type I (IFN-I) response than SARS-CoV, which itself is a weak IFN-I inducer in human cells. SARS-CoV-2 limits the IFN-III response as well. Reduced numbers of plasmacytoid dendritic cells with age is associated with increased COVID-19 severity, possibly because these cells are substantial interferon producers.",
"title": "Coronavirus response"
},
{
"paragraph_id": 15,
"text": "Ten percent of patients with life-threatening COVID-19 have autoantibodies against type I interferon.",
"title": "Coronavirus response"
},
{
"paragraph_id": 16,
"text": "Delayed IFN-I response contributes to the pathogenic inflammation (cytokine storm) seen in later stages of COVID-19 disease. Application of IFN-I prior to (or in the very early stages of) viral infection can be protective, as can treatment with pegylated IFN-λIII, which should be validated in randomized clinical trials.",
"title": "Coronavirus response"
},
{
"paragraph_id": 17,
"text": "Interferon beta-1a and interferon beta-1b are used to treat and control multiple sclerosis, an autoimmune disorder. This treatment may help in reducing attacks in relapsing-remitting multiple sclerosis and slowing disease progression and activity in secondary progressive multiple sclerosis.",
"title": "Interferon therapy"
},
{
"paragraph_id": 18,
"text": "Interferon therapy is used (in combination with chemotherapy and radiation) as a treatment for some cancers. This treatment can be used in hematological malignancy, such as in leukemia and lymphomas including hairy cell leukemia, chronic myeloid leukemia, nodular lymphoma, and cutaneous T-cell lymphoma. Patients with recurrent melanomas receive recombinant IFN-α2b.",
"title": "Interferon therapy"
},
{
"paragraph_id": 19,
"text": "Both hepatitis B and hepatitis C can be treated with IFN-α, often in combination with other antiviral drugs. Some of those treated with interferon have a sustained virological response and can eliminate hepatitis virus in the case of hepatitis C. The most common strain of hepatitis C virus (HCV) worldwide—genotype I— can be treated with interferon-α, ribavirin and protease inhibitors such as telaprevir, boceprevir or the nucleotide analog polymerase inhibitor sofosbuvir. Biopsies of patients given the treatment show reductions in liver damage and cirrhosis. Control of chronic hepatitis C by IFN is associated with reduced hepatocellular carcinoma. A single nucleotide polymorphism (SNP) in the gene encoding the type III interferon IFN-λ3 was found to be protective against chronic infection following proven HCV infection and predicted treatment response to interferon-based regimens. The frequency of the SNP differed significantly by race, partly explaining observed differences in response to interferon therapy between European-Americans and African-Americans.",
"title": "Interferon therapy"
},
{
"paragraph_id": 20,
"text": "Unconfirmed results suggested that interferon eye drops may be an effective treatment for people who have herpes simplex virus epithelial keratitis, a type of eye infection. There is no clear evidence to suggest that removing the infected tissue (debridement) followed by interferon drops is an effective treatment approach for these types of eye infections. Unconfirmed results suggested that the combination of interferon and an antiviral agent may speed the healing process compared to antiviral therapy alone.",
"title": "Interferon therapy"
},
{
"paragraph_id": 21,
"text": "When used in systemic therapy, IFNs are mostly administered by an intramuscular injection. The injection of IFNs in the muscle or under the skin is generally well tolerated. The most frequent adverse effects are flu-like symptoms: increased body temperature, feeling ill, fatigue, headache, muscle pain, convulsion, dizziness, hair thinning, and depression. Erythema, pain, and hardness at the site of injection are also frequently observed. IFN therapy causes immunosuppression, in particular through neutropenia and can result in some infections manifesting in unusual ways.",
"title": "Interferon therapy"
},
{
"paragraph_id": 22,
"text": "Several different types of interferons are approved for use in humans. One was first approved for medical use in 1986. For example, in January 2001, the Food and Drug Administration (FDA) approved the use of PEGylated interferon-alpha in the USA; in this formulation, PEGylated interferon-alpha-2b (Pegintron), polyethylene glycol is linked to the interferon molecule to make the interferon last longer in the body. Approval for PEGylated interferon-alpha-2a (Pegasys) followed in October 2002. These PEGylated drugs are injected once weekly, rather than administering two or three times per week, as is necessary for conventional interferon-alpha. When used with the antiviral drug ribavirin, PEGylated interferon is effective in treatment of hepatitis C; at least 75% of people with hepatitis C genotypes 2 or 3 benefit from interferon treatment, although this is effective in less than 50% of people infected with genotype 1 (the more common form of hepatitis C virus in both the U.S. and Western Europe). Interferon-containing regimens may also include protease inhibitors such as boceprevir and telaprevir.",
"title": "Interferon therapy"
},
{
"paragraph_id": 23,
"text": "There are also interferon-inducing drugs, notably tilorone that is shown to be effective against Ebola virus.",
"title": "Interferon therapy"
},
{
"paragraph_id": 24,
"text": "Interferons were first described in 1957 by Alick Isaacs and Jean Lindenmann at the National Institute for Medical Research in London; the discovery was a result of their studies of viral interference. Viral interference refers to the inhibition of virus growth caused by previous exposure of cells to an active or a heat-inactivated virus. Isaacs and Lindenmann were working with a system that involved the inhibition of the growth of live influenza virus in chicken embryo chorioallantoic membranes by heat-inactivated influenza virus. Their experiments revealed that this interference was mediated by a protein released by cells in the heat-inactivated influenza virus-treated membranes. They published their results in 1957 naming the antiviral factor they had discovered interferon. The findings of Isaacs and Lindenmann have been widely confirmed and corroborated in the literature.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Furthermore, others may have made observations on interferons before the 1957 publication of Isaacs and Lindenmann. For example, during research to produce a more efficient vaccine for smallpox, Yasu-ichi Nagano and Yasuhiko Kojima—two Japanese virologists working at the Institute for Infectious Diseases at the University of Tokyo—noticed inhibition of viral growth in an area of rabbit-skin or testis previously inoculated with UV-inactivated virus. They hypothesised that some \"viral inhibitory factor\" was present in the tissues infected with virus and attempted to isolate and characterize this factor from tissue homogenates. Independently, Monto Ho, in John Enders's lab, observed in 1957 that attenuated poliovirus conferred a species specific anti-viral effect in human amniotic cell cultures. They described these observations in a 1959 publication, naming the responsible factor viral inhibitory factor (VIF). It took another fifteen to twenty years, using somatic cell genetics, to show that the interferon action gene and interferon gene reside in different human chromosomes. The purification of human beta interferon did not occur until 1977. Y.H. Tan and his co-workers purified and produced biologically active, radio-labeled human beta interferon by superinducing the interferon gene in fibroblast cells, and they showed its active site contains tyrosine residues. Tan's laboratory isolated sufficient amounts of human beta interferon to perform the first amino acid, sugar composition and N-terminal analyses. They showed that human beta interferon was an unusually hydrophobic glycoprotein. This explained the large loss of interferon activity when preparations were transferred from test tube to test tube or from vessel to vessel during purification. The analyses showed the reality of interferon activity by chemical verification. The purification of human alpha interferon was not reported until 1978. A series of publications from the laboratories of Sidney Pestka and Alan Waldman between 1978 and 1981, describe the purification of the type I interferons IFN-α and IFN-β. By the early 1980s, genes for these interferons had been cloned, adding further definitive proof that interferons were responsible for interfering with viral replication. Gene cloning also confirmed that IFN-α was encoded by a family of many related genes. The type II IFN (IFN-γ) gene was also isolated around this time.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Interferon was first synthesized manually at Rockefeller University in the lab of Dr. Bruce Merrifield, using solid phase peptide synthesis, one amino acid at a time. He later won the Nobel Prize in chemistry. Interferon was scarce and expensive until 1980, when the interferon gene was inserted into bacteria using recombinant DNA technology, allowing mass cultivation and purification from bacterial cultures or derived from yeasts. Interferon can also be produced by recombinant mammalian cells. Before the early 1970s, large scale production of human interferon had been pioneered by Kari Cantell. He produced large amounts of human alpha interferon from large quantities of human white blood cells collected by the Finnish Blood Bank. Large amounts of human beta interferon were made by superinducing the beta interferon gene in human fibroblast cells.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "Cantell's and Tan's methods of making large amounts of natural interferon were critical for chemical characterisation, clinical trials and the preparation of small amounts of interferon messenger RNA to clone the human alpha and beta interferon genes. The superinduced human beta interferon messenger RNA was prepared by Tan's lab for Cetus corp. to clone the human beta interferon gene in bacteria and the recombinant interferon was developed as 'betaseron' and approved for the treatment of MS. Superinduction of the human beta interferon gene was also used by Israeli scientists to manufacture human beta interferon.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "",
"title": "Human interferons"
},
{
"paragraph_id": 29,
"text": "",
"title": "Teleost fish interferons"
}
]
| Interferons are a group of signaling proteins made and released by host cells in response to the presence of several viruses. In a typical scenario, a virus-infected cell will release interferons causing nearby cells to heighten their anti-viral defenses. IFNs belong to the large class of proteins known as cytokines, molecules used for communication between cells to trigger the protective defenses of the immune system that help eradicate pathogens. Interferons are named for their ability to "interfere" with viral replication by protecting cells from virus infections. However, virus-encoded genetic elements have the ability to antagonize the IFN response contributing to viral pathogenesis and viral diseases. IFNs also have various other functions: they activate immune cells, such as natural killer cells and macrophages, and they increase host defenses by up-regulating antigen presentation by virtue of increasing the expression of major histocompatibility complex (MHC) antigens. Certain symptoms of infections, such as fever, muscle pain and "flu-like symptoms", are also caused by the production of IFNs and other cytokines. More than twenty distinct IFN genes and proteins have been identified in animals, including humans. They are typically divided among three classes: Type I IFN, Type II IFN, and Type III IFN. IFNs belonging to all three classes are important for fighting viral infections and for the regulation of the immune system. | 2001-10-06T00:49:22Z | 2023-11-20T09:16:02Z | [
"Template:Redirect",
"Template:Authority control",
"Template:Cite patent",
"Template:Cytokine receptor modulators",
"Template:Portal bar",
"Template:Primary sources section",
"Template:Columns-list",
"Template:Reflist",
"Template:Cite journal",
"Template:Cite news",
"Template:Cite web",
"Template:Commons category-inline",
"Template:Cytokines",
"Template:Cite book",
"Template:Antivirals",
"Template:Short description",
"Template:Pfam box",
"Template:IPAc-en",
"Template:Citation needed",
"Template:More citations needed section"
]
| https://en.wikipedia.org/wiki/Interferon |
15,123 | Israeli settlement | Israeli settlements or colonies are civilian communities where Israeli citizens live, almost exclusively of Jewish identity or ethnicity, built on lands occupied by Israel since the Six-Day War in 1967. The international community considers Israeli settlements to be illegal under international law, though Israel disputes this.
Israeli settlements currently exist in the West Bank (including East Jerusalem), claimed by the State of Palestine as its sovereign territory, and in the Golan Heights, which is internationally considered Syrian territory. East Jerusalem and the Golan Heights have been effectively annexed by Israel, though the international community has rejected any change of status and considers each occupied territory. Although the West Bank settlements are on land administered under Israeli military rule rather than civil law, Israeli civil law is "pipelined" into the settlements, such that Israeli citizens living there are treated similarly to those living in Israel. In the West Bank, Israel continues to expand its remaining settlements as well as settling new areas, despite pressure from the international community to desist. The international community regards both territories as held under Israeli occupation and the localities established there to be illegal settlements. The International Court of Justice found the settlements to be illegal in its 2004 advisory opinion on the West Bank barrier.
As of January 2023, there are 144 Israeli settlements in the West Bank, including 12 in East Jerusalem. There are over 100 Israeli illegal outposts in the West Bank. In total, over 450,000 Israeli settlers live in the West Bank excluding East Jerusalem, with an additional 220,000 Jewish settlers residing in East Jerusalem. Additionally, over 25,000 Israeli settlers live in the Golan Heights. Israeli settlements had previously been built within the Egyptian territory of the Sinai Peninsula, and within the Palestinian territory of the Gaza Strip; however, Israel evacuated and dismantled the 18 Sinai settlements following the 1979 Egypt–Israel peace agreement and all of the 21 settlements in the Gaza Strip, along with four in the West Bank, in 2005 as part of its unilateral disengagement from Gaza.
The transfer by an occupying power of its civilian population into the territory it occupies is a war crime, although Israel disputes that this applies to the West Bank. On 20 December 2019, the International Criminal Court announced an International Criminal Court investigation in Palestine into alleged war crimes. The presence and ongoing expansion of existing settlements by Israel and the construction of settlement outposts is frequently criticized as an obstacle to the Israeli–Palestinian peace process by the Palestinians, and third parties such as the OIC, the United Nations, Russia, the United Kingdom, France, and the European Union have echoed those criticisms. The international community considers the settlements to be illegal under international law, and the United Nations has repeatedly upheld the view that Israel's construction of settlements constitutes a violation of the Fourth Geneva Convention. The United States for decades considered the settlements to be "illegitimate", until the Trump administration in November 2019 shifted its position, declaring "the establishment of Israeli civilian settlements in the West Bank is not per se inconsistent with international law."
Certain observers and Palestinians occasionally use the term "Israeli colonies" as a substitute for the term "settlements". Settlements range in character from farming communities and frontier villages to urban suburbs and neighborhoods. The four largest settlements, Modi'in Illit, Ma'ale Adumim, Beitar Illit and Ariel, have achieved city status. Ariel has 18,000 residents, while the rest have around 37,000 to 55,500 each.
Settlement has an economic dimension, much of it driven by the significantly lower costs of housing for Israeli citizens living in Israeli settlements compared to the cost of housing and living in Israel proper. Government spending per citizen in the settlements is double that spent per Israeli citizen in Tel Aviv and Jerusalem, while government spending for settlers in isolated Israeli settlements is three times the Israeli national average. Most of the spending goes to the security of the Israeli citizens living there.
As of January 2023, there are 144 Israeli settlements in the West Bank, including 12 in East Jerusalem. In addition, there are over 100 Israeli illegal outposts in the West Bank. In total, over 500,000 Israeli settlers live in the West Bank excluding East Jerusalem, with an additional 220,000 Jewish settlers residing in East Jerusalem.
Additionally, over 20,000 Israeli citizens live in settlements in the Golan Heights.
Following the 1967 Six-Day War, Israel occupied a number of territories. It took over the remainder of the Palestinian Mandate territories of the West Bank including East Jerusalem, from Jordan which had controlled the territories since the 1948 Arab-Israeli war, and the Gaza Strip from Egypt, which had held Gaza under occupation since 1949. From Egypt, it also captured the Sinai Peninsula and from Syria it captured most of the Golan Heights, which since 1981 has been administered under the Golan Heights Law.
As early as September 1967, Israeli settlement policy was progressively encouraged by the Labor government of Levi Eshkol. The Allon Plan, named after its creator Yigal Allon, became the basis for Israeli settlement in the West Bank. It implied Israeli annexation of major parts of the Israeli-occupied territories, especially East Jerusalem, Gush Etzion and the Jordan Valley. The settlement policy of the government of Yitzhak Rabin was also derived from the Allon Plan.
The first settlement was Kfar Etzion, in the southern West Bank, although that location was outside the Allon Plan. Many settlements began as Nahal settlements. They were established as military outposts and later expanded and populated with civilian inhabitants. According to a secret document dating to 1970, obtained by Haaretz, the settlement of Kiryat Arba was established by confiscating land by military order and falsely representing the project as being strictly for military use while in reality, Kiryat Arba was planned for settler use. The method of confiscating land by military order for establishing civilian settlements was an open secret in Israel throughout the 1970s, but publication of the information was suppressed by the military censor.
In the 1970s, Israel's methods for seizing Palestinian land to establish settlements included requisitioning for ostensibly military purposes and spraying of land with poison.
The Likud government of Menahem Begin, from 1977, was more supportive of settlement in other parts of the West Bank, by organizations like Gush Emunim and the Jewish Agency/World Zionist Organization, and intensified the settlement activities. In a government statement, Likud declared that the entire historic Land of Israel is the inalienable heritage of the Jewish people and that no part of the West Bank should be handed over to foreign rule. Ariel Sharon declared in the same year (1977) that there was a plan to settle 2 million Jews in the West Bank by 2000. The government abrogated the prohibition on the purchase of occupied land by Israelis; the "Drobles Plan", a plan for large-scale settlement in the West Bank meant to prevent a Palestinian state under the pretext of security, became the framework for its policy. The "Drobles Plan" from the World Zionist Organization, dated October 1978 and named "Master Plan for the Development of Settlements in Judea and Samaria, 1979–1983", was written by the Jewish Agency director and former Knesset member Matityahu Drobles. In January 1981, the government adopted a follow-up plan from Drobles, dated September 1980 and named "The current state of the settlements in Judea and Samaria", with more details about settlement strategy and policy.
Since 1967, government-funded settlement projects in the West Bank have been implemented by the "Settlement Division" of the World Zionist Organization. Though formally a non-governmental organization, it is funded by the Israeli government and leases lands from the Civil Administration to settle in the West Bank. It is authorized to create settlements in the West Bank on lands licensed to it by the Civil Administration. Traditionally, the Settlement Division was under the responsibility of the Agriculture Ministry. From the Oslo Accords onward, it was housed within the Prime Minister's Office (PMO). In 2007, it was moved back to the Agriculture Ministry. In 2009, the Netanyahu government decided to subject all settlement activities to the additional approval of the Prime Minister and the Defense Minister. In 2011, Netanyahu sought to move the Settlement Division back under the direct control of (his own) PMO and to curtail Defense Minister Ehud Barak's authority.
At the presentation of the Oslo II Accord on 5 October 1995 in the Knesset, PM Yitzhak Rabin expounded the Israeli settlement policy in connection with the permanent solution to the conflict. Israel wanted "a Palestinian entity, less than a state, which will be a home to most of the Palestinian residents living in the Gaza Strip and the West Bank". It wanted to keep settlements beyond the Green Line including Ma'ale Adumim and Givat Ze'ev in East Jerusalem. Blocs of settlements should be established in the West Bank. Rabin promised not to return to the 4 June 1967 lines.
In June 1997, the Likud government of Benjamin Netanyahu presented its "Allon Plus Plan". This plan provided for the retention of some 60% of the West Bank, including the "Greater Jerusalem" area with the settlements Gush Etzion and Ma'aleh Adumim, other large concentrations of settlements in the West Bank, the entire Jordan Valley, a "security area", and a network of Israeli-only bypass roads.
In the Road Map for Peace of 2002, which was never implemented, the establishment of a Palestinian state was acknowledged and outposts were to be dismantled. However, many new outposts appeared instead, and few were removed. Israel's settlement policy remained unchanged. Settlements in East Jerusalem and the remaining West Bank were expanded.
While according to official Israeli policy no new settlements were built, at least a hundred unauthorized outposts have been established since 2002 with state funding in the 60% of the West Bank that was not under Palestinian administrative control, and the population growth of settlers did not diminish.
In 2005, all 21 settlements in the Gaza Strip and four in the northern West Bank were forcibly evacuated as part of the Israeli disengagement from the Gaza Strip, known to some in Israel as "the Expulsion". However, the evacuation was more than offset by population transfers to the West Bank.
After the failure of the Roadmap, several new plans emerged to settle major parts of the West Bank. In 2011, Haaretz revealed the Civil Administration's "Blue Line" plan, written in January 2011, which aims to increase Israeli "state-ownership" of West Bank land ("state lands") and settlement in strategic areas like the Jordan Valley and the northern Dead Sea area. In March 2012, it was revealed that the Civil Administration had over the years covertly allotted 10% of the West Bank for further settlement. Provisional names for future new settlements or settlement expansions were already assigned. The plan includes many Palestinian built-up sites in Areas A and B.
Some settlements are self-contained cities with a stable population in the tens of thousands, infrastructure, and all other features of permanence. Examples are Beitar Illit (a city of close to 45,000 residents), Ma'ale Adumim, Modi'in Illit, and Ariel (almost 20,000 residents). Some are towns with local council status and populations of 2,000–20,000, such as Alfei Menashe, Eli, Elkana, Efrat and Kiryat Arba. There are also clusters of villages governed by a local elected committee and regional councils that are responsible for municipal services. Examples are Kfar Adumim, Neve Daniel, Kfar Tapuach and Ateret. Kibbutzim and moshavim in the territories include Argaman, Gilgal, Niran and Yitav. Jewish neighborhoods have been built on the outskirts of Arab neighborhoods, for example in Hebron. In Jerusalem, there are urban neighborhoods where Jews and Arabs live together: the Muslim Quarter, Silwan, Abu Tor, Sheikh Jarrah and Shimon HaTzadik.
Under the Oslo Accords, the West Bank was divided into three separate parts designated as Area A, Area B and Area C. Leaving aside the position of East Jerusalem, all of the settlements are in Area C which comprises about 60% of the West Bank.
Some settlements were established on sites where Jewish communities had existed during the British Mandate of Palestine or even since the First Aliyah or ancient times.
At the end of 2010, 534,224 Jewish Israelis lived in the West Bank, including East Jerusalem. 314,132 of them lived in the 121 authorised settlements and 102 unauthorised settlement outposts in the West Bank, 198,629 were living in East Jerusalem, and almost 20,000 lived in settlements in the Golan Heights.
By 2011, the number of Jewish settlers in the West Bank excluding East Jerusalem had increased to 328,423 people.
In June 2014, the number of Israeli settlers in the West Bank excluding East Jerusalem had increased to 382,031 people, with over 20,000 Israeli settlers in the Golan Heights.
In January 2015, the Israeli Interior Ministry gave figures of 389,250 Israeli citizens living in the West Bank outside East Jerusalem.
By the end of 2016, the West Bank Jewish population had risen to 420,899, excluding East Jerusalem, where there were more than 200,000 Jews.
In 2019, the number of Israeli settlers in the West Bank excluding East Jerusalem had risen to 441,600 individuals, and the number of Israeli settlers in the Golan Heights had risen to 25,261.
In 2020, the number of Israeli settlers in the West Bank excluding East Jerusalem had reportedly risen to 451,700 individuals, with an additional 220,000 Jews living in East Jerusalem.
In addition to internal migration, the settlements absorb about 1,000 new immigrants from outside Israel each year, a large though declining number. The American Kulanu organization works with such right-wing Israeli settler groups as Amishav and Shavei Israel to settle "lost" Jews of color in areas where local Palestinians are being displaced. In the 1990s, the annual settler population growth was more than three times the annual population growth in Israel. Population growth has continued in the 2000s. According to the BBC, the settlements in the West Bank have been growing at a rate of 5–6% since 2001. In 2016, there were sixty thousand American Israelis living in settlements in the West Bank.
The establishment of settlements in the Palestinian territories is linked to the displacement of the Palestinian populations as evidenced by a 1979 Security Council Commission which established a link between Israeli settlements and the displacement of the local population. The commission also found that those who remained were under consistent pressure to leave to make room for further settlers who were being encouraged into the area. In conclusion the commission stated that settlement in the Palestinian territories was causing "profound and irreversible changes of a geographic and demographic nature".
The Israeli settlements in the West Bank fall under the administrative district of Judea and Samaria Area. Since December 2007, approval by both the Israeli Prime Minister and Israeli Defense Minister of all settlement activities (including planning) in the West Bank is required. Authority for planning and construction is held by the Israel Defense Forces Civil Administration.
The area consists of four cities, thirteen local councils and six regional councils.
The Yesha Council (Hebrew: מועצת יש"ע, Moatzat Yesha, a Hebrew acronym for Judea, Samaria and Gaza) is the umbrella organization of municipal councils in the West Bank.
The actual buildings of the Israeli settlements cover only 1 percent of the West Bank, but their jurisdiction and their regional councils extend to about 42 percent of the West Bank, according to the Israeli NGO B'Tselem. Yesha Council chairman Dani Dayan disputes the figures and claims that the settlements only control 9.2 percent of the West Bank.
Between 2001 and 2007 more than 10,000 Israeli settlement units were built, while 91 permits were issued for Palestinian construction, and 1,663 Palestinian structures were demolished in Area C.
West Bank Palestinians have their cases tried in Israel's military courts while Jewish Israeli settlers living in the same occupied territory are tried in civil courts. The arrangement has been described as "de facto segregation" by the UN Committee on the Elimination of Racial Discrimination. A bill to formally extend Israeli law to the Israeli settlements in the West Bank was rejected in 2012. The basic military laws governing the West Bank are influenced by what is called the "pipelining" of Israeli legislation. As a result of "enclave law", large portions of Israeli civil law are applied to Israeli settlements and Israeli residents in the occupied territories.
On 31 August 2014, Israel announced it was appropriating 400 hectares of land in the West Bank to eventually house 1,000 Israeli families. The appropriation was described as the largest in more than 30 years. According to reports on Israel Radio, the development was a response to the 2014 kidnapping and murder of Israeli teenagers.
East Jerusalem is defined in the Jerusalem Law of 1980 as part of Israel and its capital, Jerusalem. As such it is administered as part of the city and its district, the Jerusalem District. Pre-1967 residents of East Jerusalem and their descendants have residency status in the city but many have refused Israeli citizenship. Thus, the Israeli government maintains an administrative distinction between Israeli citizens and non-citizens in East Jerusalem, but the Jerusalem municipality does not.
The Golan Heights is administered under Israeli civil law as the Golan sub-district, a part of the Northern District. Israel makes no legal or administrative distinction between pre-1967 communities in the Golan Heights (mainly Druze) and the post-1967 settlements.
After the capture of the Sinai Peninsula from Egypt in the 1967 Six-Day War, settlements were established along the Gulf of Aqaba and in northeast Sinai, just below the Gaza Strip. Israel had plans to expand the settlement of Yamit into a city with a population of 200,000, though the actual population of Yamit did not exceed 3,000. The Sinai Peninsula was returned to Egypt in stages beginning in 1979 as part of the Egypt–Israel peace treaty. As required by the treaty, in 1982 Israel evacuated the Israeli civilian population from its 18 Sinai settlements. In some instances evacuations were carried out forcefully, such as the evacuation of Yamit. All the settlements were then dismantled.
Before Israel's unilateral disengagement plan in which the Israeli settlements were evacuated, there were 21 settlements in the Gaza Strip under the administration of the Hof Aza Regional Council. The land was allocated in such a way that each Israeli settler disposed of 400 times the land available to the Palestinian refugees, and 20 times the volume of water allowed to the peasant farmers of the Strip.
The consensus view in the international community is that the existence of Israeli settlements in the West Bank including East Jerusalem and the Golan Heights is in violation of international law. The Fourth Geneva Convention includes statements such as "the Occupying Power shall not deport or transfer parts of its own civilian population into the territory it occupies". On 20 December 2019, International Criminal Court chief prosecutor Fatou Bensouda announced an International Criminal Court investigation in Palestine into alleged war crimes committed during the Israeli–Palestinian conflict. At present, the view of the international community, as reflected in numerous UN resolutions, regards the building and existence of Israeli settlements in the West Bank, East Jerusalem and the Golan Heights as a violation of international law. UN Security Council Resolution 446 refers to the Fourth Geneva Convention as the applicable international legal instrument, and calls upon Israel to desist from transferring its own population into the territories or changing their demographic makeup. The reconvened Conference of the High Contracting Parties to the Geneva Conventions has declared the settlements illegal as has the primary judicial organ of the UN, the International Court of Justice.
The position of successive Israeli governments is that all authorized settlements are entirely legal and consistent with international law. In practice, Israel does not accept that the Fourth Geneva Convention applies de jure, but has stated that on humanitarian issues it will govern itself de facto by its provisions, without specifying which these are. The scholar and jurist Eugene Rostow has disputed the illegality of authorized settlements.
Under Israeli law, West Bank settlements must meet specific criteria to be legal. In 2009, there were approximately 100 small communities that did not meet these criteria and are referred to as illegal outposts.
In 2014 twelve EU countries warned businesses against involving themselves in the settlements. According to the warnings, economic activities relating to the settlements involve legal and economic risks stemming from the fact that the settlements are built on occupied land not recognized as Israel's.
The consensus of the international community, including the vast majority of states, the overwhelming majority of legal experts, the International Court of Justice and the UN, is that the settlements are in violation of international law. After the Six-Day War, in 1967, Theodor Meron, legal counsel to the Israeli Foreign Ministry, stated in a legal opinion to the Prime Minister:
"My conclusion is that civilian settlement in the administered territories contravenes the explicit provisions of the Fourth Geneva Convention."
This legal opinion was sent to Prime Minister Levi Eshkol. However, it was not made public at the time. The Labor cabinet allowed settlements despite the warning. This paved the way for future settlement growth. In 2007, Meron stated that "I believe that I would have given the same opinion today."
In 1978, the Legal Adviser of the Department of State of the United States reached the same conclusion.
The International Court of Justice, in its advisory opinion, has since ruled that Israel is in breach of international law by establishing settlements in Occupied Palestinian Territory, including East Jerusalem. The Court maintains that Israel cannot rely on its right of self-defense or necessity to impose a regime that violates international law. The Court also ruled that Israel violates basic human rights by impeding liberty of movement and the inhabitants' right to work, health, education and an adequate standard of living.
International intergovernmental organizations such as the Conference of the High Contracting Parties to the Fourth Geneva Convention, major organs of the United Nations, the European Union, and Canada, also regard the settlements as a violation of international law. The Committee on the Elimination of Racial Discrimination wrote that "The status of the settlements was clearly inconsistent with Article 3 of the Convention, which, as noted in the Committee's General Recommendation XIX, prohibited all forms of racial segregation in all countries. There is a consensus among publicists that the prohibition of racial discrimination, irrespective of territories, is an imperative norm of international law." Amnesty International, and Human Rights Watch have also characterized the settlements as a violation of international law.
In late January 2013 a report drafted by three justices, presided over by Christine Chanet, and issued by the United Nations Human Rights Council declared that Jewish settlements constituted a creeping annexation based on multiple violations of the Geneva Conventions and international law, and stated that if Palestine ratified the Rome Statute, Israel could be tried for "gross violations of human rights law and serious violations of international humanitarian law." A spokesman for Israel's Foreign Ministry declared the report 'unfortunate' and accused the UN's Human Rights Council of a "systematically one-sided and biased approach towards Israel."
The Supreme Court of Israel, with a variety of different justices sitting, has repeatedly stated that Israel's presence in the West Bank is in violation of international law.
Four prominent jurists cited the concept of the "sovereignty vacuum" in the immediate aftermath of the Six-Day War to describe the legal status of the West Bank and Gaza: Yehuda Zvi Blum in 1968, Elihu Lauterpacht in 1968, Julius Stone in 1969 and 1981, and Stephen M. Schwebel in 1970. Eugene V. Rostow also argued in 1979 that the occupied territories' legal status was undetermined.
Professor Ben Saul took exception to this view, arguing that Article 49(6) can be read to include voluntary or assisted transfers, as indeed it was read by the International Court of Justice in its Israeli Wall Advisory Opinion (2004).
Israel maintains that a temporary use of land and buildings for various purposes is permissible under a plea of military necessity and that the settlements fulfilled security needs. Israel argues that its settlement policy is consistent with international law, including the Fourth Geneva Convention, while recognising that some settlements have been constructed illegally on private land. The Israeli Supreme Court has ruled that the power of the Civil Administration and the Military Commander in the occupied territories is limited by the entrenched customary rules of public international law as codified in the Hague Regulations. In 1998 the Israeli Minister of Foreign Affairs produced "The International Criminal Court Background Paper". It concludes
International law has long recognised that there are crimes of such severity they should be considered "international crimes." Such crimes have been established in treaties such as the Genocide Convention and the Geneva Conventions.... The following are Israel's primary issues of concern [i.e., with the rules of the ICC]: The inclusion of settlement activity as a "war crime" is a cynical attempt to abuse the Court for political ends. The implication that the transfer of civilian population to occupied territories can be classified as a crime equal in gravity to attacks on civilian population centres or mass murder is preposterous and has no basis in international law.
A UN conference was held in Rome in 1998, where Israel was one of seven countries to vote against the Rome Statute to establish the International Criminal Court. Israel was opposed to a provision that included as a war crime the transfer of civilian populations into territory the government occupies. Israel has signed the statute, but not ratified the treaty.
A 1996 amendment to an Israeli military order states that privately owned land cannot be part of a settlement unless the land in question has been confiscated for military purposes. In 2006 Peace Now acquired a report, which it claims was leaked from the Israeli Government's Civil Administration, indicating that up to 40 percent of the land Israel plans to retain in the West Bank is privately owned by Palestinians. Peace Now called this a violation of Israeli law. Peace Now published a comprehensive report about settlements on private lands. In the wake of a legal battle, Peace Now lowered the figure to 32 percent, which the Civil Administration also denied. The Washington Post reported that "The 38-page report offers what appears to be a comprehensive argument against the Israeli government's contention that it avoids building on private land, drawing on the state's own data to make the case."
In February 2008, the Civil Administration stated that the land on which more than a third of West Bank settlements were built had been expropriated by the IDF for "security purposes." The unauthorized seizure of private Palestinian land was defined by the Civil Administration itself as 'theft.' According to B'Tselem, more than 42 percent of the West Bank is under the control of Israeli settlements, 21 percent of which was seized from private Palestinian owners, much of it in violation of the 1979 Israeli Supreme Court decision.
In 1979, the government decided to extend settlements or build new ones only on "state lands".
A secret database, drafted by a retired senior officer, Baruch Spiegel, on orders from former defense minister Shaul Mofaz, found that some settlements deemed legal by Israel were illegal outposts, and that large portions of Ofra, Elon Moreh and Beit El were built on private Palestinian land. The "Spiegel report" was revealed by Haaretz in 2009. Many settlements are largely built on private lands, without approval of the Israeli Government. According to Israel, the bulk of the land was vacant, was leased from the state, or bought fairly from Palestinian landowners.
Invoking the Absentees' Property Laws to transfer, sell or lease property in East Jerusalem owned by Palestinians who live elsewhere without compensation has been criticized both inside and outside of Israel. Opponents of the settlements claim that "vacant" land belonged to Arabs who fled or collectively to an entire village, a practice that developed under Ottoman rule. B'Tselem charged that Israel is using the absence of modern legal documents for the communal land as a legal basis for expropriating it. These "abandoned lands" are sometimes laundered through a series of fraudulent sales.
According to Amira Hass, one of the techniques used by Israel to expropriate Palestinian land is to place desired areas under a 'military firing zone' classification, and then issue orders for the evacuation of Palestinians from the villages in that range, while allowing contiguous Jewish settlements to remain unaffected.
Amnesty International argues that Israel's settlement policy is discriminatory and a violation of Palestinian human rights. B'Tselem claims that Israeli travel restrictions impinge on Palestinian freedom of movement, and that Palestinian human rights have been violated in Hebron due to the presence of the settlers within the city. According to B'Tselem, over fifty percent of West Bank land expropriated from Palestinians has been used to establish settlements and create reserves of land for their future expansion. The seized lands mainly benefit the settlements and Palestinians cannot use them. The roads built by Israel in the West Bank to serve the settlements are closed to Palestinian vehicles and often act as a barrier between villages and the lands on which they subsist.
Human Rights Watch and other human rights observers regularly file reports on "settler violence," referring to stoning and shooting incidents involving Israeli settlers. Israel's withdrawals from Gaza and Hebron have led to violent settler protests and disputes over land and resources. Meron Benvenisti described the settlement enterprise as a "commercial real estate project that conscripts Zionist rhetoric for profit."
The construction of the Israeli West Bank barrier has been criticized as an infringement on Palestinian human and land rights. The United Nations Office for the Coordination of Humanitarian Affairs estimated that 10% of the West Bank would fall on the Israeli side of the barrier.
In July 2012, the UN Human Rights Council decided to set up a probe into Jewish settlements. The report of the independent international fact-finding mission which investigated the "implications of the Israeli settlements on the civil, political, economic, social and cultural rights of the Palestinian people throughout the Occupied Palestinian Territory" was published in February 2013.
In February 2020, the Office of the United Nations High Commissioner for Human Rights published a list of 112 companies linked to activities related to Israeli settlements in the occupied West Bank.
Goods produced in Israeli settlements are able to stay competitive on the global market, in part because of massive state subsidies they receive from the Israeli government. Farmers and producers are given state assistance, while companies that set up in the territories receive tax breaks and direct government subsidies. An Israeli government fund has also been established to help companies pay customs penalties. Palestinian officials estimate that settlers sell goods worth some $500 million to the Palestinian market. Israel has built 16 industrial zones, containing roughly 1,000 industrial plants, in the West Bank and East Jerusalem on acreage that consumes large parts of the territory planned for a future Palestinian state. According to Jodi Rudoren these installations both entrench the occupation and provide work for Palestinians, even those opposed to it. The 16 parks are located at Shaked, Beka'ot, Baran, Karnei Shomron, Emmanuel, Barkan, Ariel, Shilo, Halamish, Ma'ale Efraim, Sha'ar Binyamin, Atarot, Mishor Adumim, Gush Etzion, Kiryat Arba and Metarim (2001). In spite of this, the West Bank settlements have failed to develop a self-sustaining local economy. About 60% of the settler workforce commutes to Israel for work, and the settlements rely primarily on their residents' employment in Israel proper rather than on local manufacturing, agriculture, or research and development. Of the industrial parks in the settlements, only two are significant, at Ma'ale Adumim and Barkan, and most of the workers there are Palestinian. Only a few hundred settler households cultivate agricultural land, and they rely primarily on Palestinian labor in doing so.
According to Israeli government estimates, $230 million worth of settler goods including fruit, vegetables, cosmetics, textiles and toys are exported to the EU each year, accounting for approximately 2% of all Israeli exports to Europe. A 2013 report of Profundo revealed that at least 38 Dutch companies imported settlement products.
European Union law requires a distinction to be made between goods originating in Israel and those from the occupied territories. The former benefit from preferential custom treatment according to the EU-Israel Association Agreement (2000); the latter don't, having been explicitly excluded from the agreement. In practice, however, settler goods often avoid mandatory customs through being labelled as originating in Israel, while European customs authorities commonly fail to complete obligatory postal code checks of products to ensure they have not originated in the occupied territories.
In 2009, the United Kingdom's Department for the Environment, Food and Rural Affairs issued new guidelines concerning labelling of goods imported from the West Bank. The new guidelines require labelling to clarify whether West Bank products originate from settlements or from the Palestinian economy. Israel's foreign ministry said that the UK was "catering to the demands of those whose ultimate goal is the boycott of Israeli products"; but this was denied by the UK government, who said that the aim of the new regulations was to allow consumers to choose for themselves what produce they buy. Denmark has similar legislation requiring food products from settlements in the occupied territories to be accurately labelled. In June 2022, Norway also stated that it would begin complying with EU regulation to label produce originating from Israeli settlements in the West Bank and Golan Heights as such.
On 12 November 2019 the Court of Justice of the European Union in a ruling covering all territory Israel captured in the 1967 war decided that labels on foodstuffs must not imply that goods produced in occupied territory came from Israel itself and must "prevent consumers from being misled as to the fact that the State of Israel is present in the territories concerned as an occupying power and not as a sovereign entity". In its ruling, the court said that failing to inform EU consumers they were potentially buying goods produced in settlements denies them access to "ethical considerations and considerations relating to the observance of international law".
In January 2019 the Dáil (Ireland's lower house) voted in favour, by 78 to 45, of the Control of Economic Activity (Occupied Territories) Bill. This piece of legislation prohibits the purchase of any goods or services from the Golan Heights, East Jerusalem or West Bank settlements. As of February 2019 the bill had several legislative stages still to complete; once enacted, anyone who breaks the law will face either a five-year jail sentence or fines of up to €250,000 ($284,000).
A petition under the European Citizens' Initiative, submitted in September 2021, was accepted on 20 February 2022. The petition seeks the adoption of legislation to ban trade with unlawful settlements. The petition requires a million signatures from across the EU and has received support from civil society groups including Human Rights Watch.
A Palestinian report argued in 2011 that settlements have a detrimental effect on the Palestinian economy, equivalent to about 85% of the nominal gross domestic product of Palestine, and that the "occupation enterprise" allows the state of Israel and commercial firms to profit from Palestinian natural resources and tourist potential. A 2013 report published by the World Bank analysed the impact that the limited access to Area C lands and resources had on the Palestinian economy. While settlements represent a single axis of control, it is the largest with 68% of the Area C lands reserved for the settlements. The report goes on to calculate that access to the lands and resources of Area C, including the territory in and around settlements, would increase the Palestinian GDP by some $3.5 billion (or 35%) per year.
The Israeli Supreme Court has ruled that Israeli companies are entitled to exploit the West Bank's natural resources for economic gain, and that international law must be "adapted" to the "reality on the ground" of long-term occupation.
Due to the availability of jobs offering twice the prevailing salary of the West Bank (as of August 2013), as well as high unemployment, tens of thousands of Palestinians work in Israeli settlements. According to the Manufacturers Association of Israel, some 22,000 Palestinians were employed in construction, agriculture, manufacturing and service industries. An Al-Quds University study in 2011 found that 82% of Palestinian workers said they would prefer to not work in Israeli settlements if they had alternative employment in the West Bank.
Palestinians have been highly involved in the construction of settlements in the West Bank. In 2013, the Palestinian Central Bureau of Statistics released their survey showing that the number of Palestinian workers who are employed by the Jewish settlements increased from 16,000 to 20,000 in the first quarter. The survey also found that Palestinians who work in Israel and the settlements are paid more than twice their salary compared to what they receive from Palestinian employers.
In 2008, Kav LaOved charged that Palestinians who work in Israeli settlements are not granted basic protections of Israeli labor law. Instead, they are employed under Jordanian labor law, which does not require minimum wage, payment for overtime and other social rights. In 2007, the Supreme Court of Israel ruled that Israeli labor law does apply to Palestinians working in West Bank settlements and applying different rules in the same work place constituted discrimination. The ruling allowed Palestinian workers to file lawsuits in Israeli courts. In 2008, the average sum claimed by such lawsuits stood at 100,000 shekels.
According to Palestinian Center for Policy and Survey Research, 63% of Palestinians opposed PA plans to prosecute Palestinians who work in the settlements. However, 72% of Palestinians support a boycott of the products they sell. Although the Palestinian Authority has criminalized working in the settlements, the director-general at the Palestinian Ministry of Labor, Samer Salameh, described the situation in February 2014 as being "caught between two fires". He said "We strongly discourage work in the settlements, since the entire enterprise is illegal and illegitimate...but given the high unemployment rate and the lack of alternatives, we do not enforce the law that criminalizes work in the settlements."
Gush Emunim Underground was a militant organization that operated in 1979–1984. The organization planned attacks on Palestinian officials and the Dome of the Rock. In 1994, Baruch Goldstein of Hebron, a member of Kach, carried out the Cave of the Patriarchs massacre, killing 29 Muslim worshipers and injuring 125. The attack was widely condemned by the Israeli government and Jewish community. The Palestinian leadership has accused Israel of "encouraging and enabling" settler violence in a bid to provoke Palestinian riots and violence in retaliation. Violence perpetrated by Israeli settlers against Palestinians constitutes terrorism according to the U.S. Department of State, and former IDF Head of Central Command Avi Mizrahi stated that such violence constitutes "terror."
In mid-2008, a UN report recorded 222 acts of Israeli settler violence against Palestinians and IDF troops compared with 291 in 2007. This trend reportedly increased in 2009. Maj-Gen Shamni said that the number had risen from a few dozen individuals to hundreds, and called it "a very grave phenomenon." In 2008–2009, the defense establishment adopted a harder line against the extremists. This group responded with a tactic dubbed "price tagging", vandalizing Palestinian property whenever police or soldiers were sent in to dismantle outposts. From January through to September 2013, 276 attacks by settlers against Palestinians were recorded.
Leading religious figures in the West Bank have harshly criticized these tactics. Rabbi Menachem Froman of Tekoa said that "Targeting Palestinians and their property is a shocking thing, ... It's an act of hurting humanity. ... This builds a wall of fire between Jews and Arabs." The Yesha Council and Hanan Porat also condemned such actions. Other rabbis have been accused of inciting violence against non-Jews. In response to settler violence, the Israeli government said that it would increase law enforcement and cut off aid to illegal outposts. Some settlers are thought to lash out at Palestinians because they are "easy victims." The United Nations accused Israel of failing to intervene and arrest settlers suspected of violence. In 2008, Haaretz wrote that "Israeli society has become accustomed to seeing lawbreaking settlers receive special treatment and no other group could similarly attack Israeli law enforcement agencies without being severely punished."
In September 2011, settlers vandalized a mosque and an army base. They slashed tires and cut cables of 12 army vehicles and sprayed graffiti. In November 2011, the United Nations Office for Coordination of Human Affairs (OCHA) in the Palestinian territories published a report on settler violence that showed a significant rise compared to 2009 and 2010. The report covered physical violence and property damage such as uprooted olive trees, damaged tractors and slaughtered sheep. The report states that 90% of complaints filed by Palestinians have been closed without charge.
According to EU reports, Israel has created an "atmosphere of impunity" for Jewish attackers, which is seen as tantamount to tacit approval by the state. In the West Bank, Jews and Palestinians live under two different legal regimes and it is difficult for Palestinians to lodge complaints, which must be filed in Hebrew in Israeli settlements.
The 27 ministers of foreign affairs of the European Union published a report in May 2012 strongly denouncing policies of the State of Israel in the West Bank and denouncing "continuous settler violence and deliberate provocations against Palestinian civilians." The report by all EU ministers called "on the government of Israel to bring the perpetrators to justice and to comply with its obligations under international law."
In July 2014, a day after the burial of three murdered Israeli teens, Mohammed Abu Khdeir, a 16-year-old Palestinian, was forced into a car by three Israeli settlers on an East Jerusalem street. His family immediately reported the abduction to the Israeli Police, who located his charred body a few hours later at Givat Shaul in the Jerusalem Forest. Preliminary results from the autopsy suggested that he was beaten and burnt while still alive. The murder suspects described the attack as a response to the June abduction and murder of three Israeli teens. The murders contributed to an outbreak of hostilities in the 2014 Israel–Gaza conflict. In July 2015, a similar incident occurred when Israeli settlers made an arson attack on two Palestinian houses, one of which was empty; the other was occupied, and a Palestinian infant was burned to death, while the four other members of his family were evacuated to hospital with serious injuries. These two incidents drew condemnation from the United States, the European Union and the IDF. The European Union criticized Israel for "failing to protect the Palestinian population".
While the economy of the Palestinian territories has shown signs of growth, the International Committee of the Red Cross reported that Palestinian olive farming has suffered. According to the ICRC, 10,000 olive trees were cut down or burned by settlers in 2007–2010. Foreign ministry spokesman Yigal Palmor said the report ignored official PA data showing that the economic situation of Palestinians had improved substantially, citing Mahmoud Abbas's comment to The Washington Post in May 2009, where he said "in the West Bank, we have a good reality, the people are living a normal life."
Haaretz blamed the violence during the olive harvest on a handful of extremists. In 2010, trees belonging to both Jews and Arabs were cut down, poisoned or torched. In the first two weeks of the harvest, 500 trees owned by Palestinians and 100 trees owned by Jews had been vandalized. In October 2013, 100 trees were cut down.
Violent attacks on olive trees seem to be facilitated by the apparently systematic refusal of the Israeli authorities to allow Palestinians to visit their own groves, sometimes for years, especially in cases where the groves are deemed to be too close to settlements.
Israeli civilians living in settlements have been targeted by violence from armed Palestinian groups. These groups, according to Human Rights Watch, assert that settlers are "legitimate targets" that have "forfeited their civilian status by residing in settlements that are illegal under international humanitarian law." Both Human Rights Watch and B'Tselem rejected this argument on the basis that the legal status of the settlements has no effect on the civilian status of their residents. Human Rights Watch said the "prohibition against intentional attacks against civilians is absolute." B'Tselem said "The settlers constitute a distinctly civilian population, which is entitled to all the protections granted civilians by international law. The Israeli security forces' use of land in the settlements or the membership of some settlers in the Israeli security forces does not affect the status of the other residents living among them, and certainly does not make them proper targets of attack."
Fatal attacks on settlers have included firing of rockets and mortars and drive-by shootings, also targeting infants and children. Violent incidents include the murder of Shalhevet Pass, a ten-month-old baby shot by a Palestinian sniper in Hebron, and the murder of two teenagers by unknown perpetrators on 8 May 2001, whose bodies were hidden in a cave near Tekoa, a crime that Israeli authorities suggest may have been committed by Palestinian terrorists. In the Bat Ayin axe attack, children in Bat Ayin were attacked by a Palestinian wielding an axe and a knife. A 13-year-old boy was killed and another was seriously wounded. Rabbi Meir Hai, a father of seven, was killed in a drive-by shooting. In August 2011, five members of one family were killed in their beds. The victims were the father Ehud (Udi) Fogel, the mother Ruth Fogel, and three of their six children—Yoav, 11, Elad, 4, and Hadas, the youngest, a three-month-old infant. According to David Ha'ivri, and as reported by multiple sources, the infant was decapitated.
Pro-Palestinian activists who hold regular protests near the settlements have been accused of stone-throwing, physical assault and provocation. In 2008, Avshalom Peled, head of the Israel Police's Hebron district, called "left-wing" activity in the city dangerous and provocative, and accused activists of antagonizing the settlers in the hope of getting a reaction.
Municipal Environmental Associations of Judea and Samaria, an environmental awareness group, was established by the settlers to address sewage treatment problems and cooperate with the Palestinian Authority on environmental issues. According to a 2004 report by Friends of the Earth Middle East, settlers account for 10% of the population in the West Bank but produce 25% of the sewage output. Beit Duqqu and Qalqilyah have accused settlers of polluting their farmland and villagers claim children have become ill after swimming in a local stream. Legal action was taken against 14 settlements by the Israeli Ministry of the Environment. The Palestinian Authority has also been criticized by environmentalists for not doing more to prevent water pollution. Settlers and Palestinians share the mountain aquifer as a water source, and both generate sewage and industrial effluents that endanger the aquifer. Friends of the Earth Middle East claimed that sewage treatment was inadequate in both sectors. Sewage from Palestinian sources was estimated at 46 million cubic meters a year, and sources from settler sources at 15 million cubic meters a year. A 2004 study found that sewage was not sufficiently treated in many settlements, while sewage from Palestinian villages and cities flowed into unlined cesspits, streams and the open environment with no treatment at all.
In a 2007 study, the Israel Nature and Parks Authority and Israeli Ministry of Environmental Protection, found that Palestinian towns and cities produced 56 million cubic meters of sewage per year, 94 percent discharged without adequate treatment, while Israeli sources produced 17.5 million cubic meters per year, 31.5 percent without adequate treatment.
According to Palestinian environmentalists, the settlers operate industrial and manufacturing plants that can create pollution as many do not conform to Israeli standards. In 2005, an old quarry between Kedumim and Nablus was slated for conversion into an industrial waste dump. Pollution experts warned that the dump would threaten Palestinian water sources.
The Consortium for Applied Research on International Migration (CARIM) has reported in their 2011 migration profile for Palestine that the reasons for individuals to leave the country are similar to those of other countries in the region and they attribute less importance to the specific political situation of the occupied Palestinian territory. Human Rights Watch in 2010 reported that Israeli settlement policies have had the effect of "forcing residents to leave their communities".
In 2008, Condoleezza Rice suggested sending Palestinian refugees to South America, which might reduce pressure on Israel to withdraw from the settlements. Sushil P. Seth speculates that Israelis seem to feel that increasing settlements will force many Palestinians to flee to other countries and that the remainder will be forced to live under Israeli terms. Speaking anonymously with regard to Israeli policies in the South Hebron Hills, a UN expert said that the Israeli crackdown on alternative energy infrastructures like solar panels is part of a deliberate strategy in Area C.
"From December 2010 to April 2011, we saw a systematic targeting of the water infrastructure in Hebron, Bethlehem and the Jordan valley. Now, in the last couple of months, they are targeting electricity. Two villages in the area have had their electrical poles demolished. There is this systematic effort by the civil administration targeting all Palestinian infrastructure in Hebron. They are hoping that by making it miserable enough, they [the Palestinians] will pick up and leave."
Approximately 1,500 people in 16 communities depend on energy produced by these installations, but are threatened with work stoppage orders from the Israeli administration against their alternative power infrastructure; demolition orders expected to follow would darken the homes of 500 people.
Ariel University, formerly the College of Judea and Samaria, is the major Israeli institution of higher education in the West Bank. With close to 13,000 students, it is Israel's largest public college. The college was accredited in 1994 and awards bachelor's degrees in arts, sciences, technology, architecture and physical therapy. On 17 July 2012, the Council for Higher Education in Judea and Samaria voted to grant the institution full university status.
Teacher training colleges include Herzog College in Alon Shvut and Orot Israel College in Elkana. Ohalo College is located in Katzrin, in the Golan Heights. Curricula at these institutions are overseen by the Council for Higher Education in Judea and Samaria (CHE-JS).
In March 2012, the Shomron Regional Council was awarded the Israeli Ministry of Education's first-prize National Education Award in recognition of its excellence in investing substantial resources in the educational system. The Shomron Regional Council achieved the highest marks in all parameters (9.28/10). Gershon Mesika, the head of the regional council, declared that the award was a certificate of honour for its educators and the settlement youth, who had proved their quality and excellence.
In 1983 an Israeli government plan entitled "Master Plan and Development Plan for Settlement in Samaria and Judea" envisaged placing a "maximally large Jewish population" in priority areas to accomplish incorporation of the West Bank in the Israeli "national system". According to Ariel Sharon, strategic settlement locations would work to preclude the formation of a Palestinian state.
Palestinians argue that the policy of settlements constitutes an effort to preempt or sabotage a peace treaty that includes Palestinian sovereignty, and claim that the presence of settlements harms the ability to have a viable and contiguous state. This was also the view of the Israeli Vice Prime Minister Haim Ramon in 2008, saying "the pressure to enlarge Ofra and other settlements does not stem from a housing shortage, but rather is an attempt to undermine any chance of reaching an agreement with the Palestinians ..."
The Israel Foreign Ministry asserts that some settlements are legitimate, as they took shape when there was no operative diplomatic arrangement, and thus they did not violate any agreement.
An early evacuation took place in 1982 as part of the Egypt–Israel peace treaty, when Israel was required to evacuate its settlers from the 18 Sinai settlements. Arab parties to the conflict had demanded the dismantlement of the settlements as a condition for peace with Israel. The evacuation was carried out with force in some instances, for example in Yamit. The settlements were demolished, as it was feared that settlers might try to return to their homes after the evacuation.
Israel's unilateral disengagement plan took place in 2005. It involved the evacuation of settlements in the Gaza Strip and part of the West Bank, including all 21 settlements in Gaza and four in the West Bank, while retaining control over Gaza's borders, coastline, and airspace. Most of these settlements had existed since the early 1980s, and some were over 30 years old; the total population involved was more than 10,000. There was significant opposition to the plan among parts of the Israeli public, and especially those living in the territories. George W. Bush said that a permanent peace deal would have to reflect "demographic realities" in the West Bank regarding Israel's settlements.
Within the former settlements, almost all buildings were demolished by Israel, with the exception of certain government and religious structures, which were completely emptied. Under an international arrangement, productive greenhouses were left to assist the Palestinian economy but about 30% of these were destroyed within hours by Palestinian looters. Following the withdrawal, many of the former synagogues were torched and destroyed by Palestinians.
Some believe that settlements need not necessarily be dismantled and evacuated, even if Israel withdraws from the territory where they stand, as they can remain under Palestinian rule. These ideas have been expressed by left-wing Israelis and by Palestinians who advocate the two-state solution, as well as by extreme Israeli right-wingers and settlers who object to any dismantling and claim links to the land that are stronger than the political boundaries of the state of Israel.
The Israeli government has often threatened to dismantle outposts. Some have actually been dismantled, occasionally with use of force; this led to settler violence.
American refusal to declare the settlements illegal was said to be the determining factor in the 2011 attempt to declare Palestinian statehood at the United Nations, the so-called Palestine 194 initiative.
Israel announced additional settlements in response to the Palestinian diplomatic initiative and Germany responded by moving to stop deliveries to Israel of submarines capable of carrying nuclear weapons.
Finally, in 2012, several European states switched to either abstain or vote for statehood in response to continued settlement construction. Israel approved further settlements in response to the vote, which brought further worldwide condemnation.
The settlements have been a source of tension between Israel and the U.S. Jimmy Carter regarded the settlements as illegal and tactically unwise. Ronald Reagan stated that they were legal but an obstacle to negotiations. In 1991, the U.S. delayed a subsidized loan to pressure Israel on the subject of settlement-building in the Jerusalem–Bethlehem corridor. In 2005, the U.S. declared support for "the retention by Israel of major Israeli population centers as an outcome of negotiations," reflecting the statement by George W. Bush that a permanent peace treaty would have to reflect "demographic realities" in the West Bank. In June 2009, Barack Obama said that the United States "does not accept the legitimacy of continued Israeli settlements."
Palestinians claim that Israel has undermined the Oslo Accords and the peace process by continuing to expand the settlements. Settlements in the Sinai Peninsula were evacuated and razed in the wake of the peace agreement with Egypt. The 27 ministers of foreign affairs of the European Union published a report in May 2012 strongly denouncing policies of the State of Israel in the West Bank and finding that Israeli settlements in the West Bank are illegal and "threaten to make a two-state solution impossible." In the framework of the Oslo I Accord of 1993 between the Israeli government and the Palestine Liberation Organization (PLO), a modus vivendi was reached whereby both parties agreed to postpone a final decision on the fate of the settlements to the permanent status negotiations (Article V.3). Israel claims that the settlements were thereby not prohibited, since there is no explicit interim provision prohibiting continued settlement construction. The agreement does, however, register an undertaking by both sides that "Neither side shall initiate or take any step that will change the status of the West Bank and the Gaza Strip pending the outcome of the permanent status negotiations" (Article XXXI(7)), which has been interpreted as not forbidding settlements but as imposing severe restrictions on new settlement building after that date. Melanie Jacques argued in this context that even 'agreements between Israel and the Palestinians which would allow settlements in the OPT, or simply tolerate them pending a settlement of the conflict, violate the Fourth Geneva Convention.'
Final status proposals have called for retaining long-established communities along the Green Line and transferring the same amount of land in Israel to the Palestinian state. The Clinton administration proposed that Israel keep some settlements in the West Bank, especially those in large blocs near the pre-1967 borders of Israel, with the Palestinians receiving concessions of land in other parts of the country. Both Clinton and Tony Blair pointed out the need for territorial and diplomatic compromise based on the validity of some of the claims of both sides.
As Minister of Defense, Ehud Barak approved a plan requiring security commitments in exchange for withdrawal from the West Bank. Barak also expressed readiness to cede parts of East Jerusalem and put the holy sites in the city under a "special regime."
On 14 June 2009, Israeli Prime Minister Benjamin Netanyahu, in response to U.S. President Barack Obama's speech in Cairo, delivered a speech setting out his principles for a Palestinian–Israeli peace, in which he stated, among other things: "... we have no intention of building new settlements or of expropriating additional land for existing settlements." In March 2010, the Netanyahu government announced plans to build 1,600 housing units in Ramat Shlomo, across the Green Line in East Jerusalem, during U.S. Vice President Joe Biden's visit to Israel, causing a diplomatic row.
On 6 September 2010, Jordanian King Abdullah II and Syrian President Bashar al-Assad said that Israel would need to withdraw from all of the lands occupied in 1967 in order to achieve peace with the Palestinians.
Bradley Burston has said that a negotiated or unilateral withdrawal from most of the settlements in the West Bank is gaining traction in Israel.
In November 2010, the United States offered to "fight against efforts to delegitimize Israel" and provide extra arms to Israel in exchange for a continuation of the settlement freeze and a final peace agreement, but failed to come to an agreement with the Israelis on the exact terms.
In December 2010, the United States criticised efforts by the Palestinian Authority to impose borders for the two states through the United Nations rather than through direct negotiations between the two sides. In February 2011, it vetoed a draft resolution to condemn all Jewish settlements established in the occupied Palestinian territory since 1967 as illegal. The resolution, which was supported by all other Security Council members and co-sponsored by nearly 120 nations, would have demanded that "Israel, as the occupying power, immediately and completely ceases all settlement activities in the occupied Palestinian territory, including East Jerusalem and that it fully respect its legal obligations in this regard." The U.S. representative said that while it agreed that the settlements were illegal, the resolution would harm chances for negotiations. Israel's deputy Foreign Minister, Daniel Ayalon, said that the "UN serves as a rubber stamp for the Arab countries and, as such, the General Assembly has an automatic majority," and that the vote "proved that the United States is the only country capable of advancing the peace process and the only righteous one speaking the truth: that direct talks between Israel and the Palestinians are required." Palestinian negotiators, however, have refused to resume direct talks until Israel ceases all settlement activity.
In November 2009, Israeli Prime Minister Netanyahu issued a 10-month settlement freeze in the West Bank in an attempt to restart negotiations with the Palestinians. The freeze did not apply to building in Jerusalem in areas across the Green Line, housing already under construction, and existing construction described as "essential for normal life in the settlements" such as synagogues, schools, kindergartens and public buildings. The Palestinians refused to negotiate without a complete halt to construction. In the face of pressure from the United States and most world powers supporting the demand by the Palestinian Authority that Israel desist from its settlement project in 2010, Israel's ambassador to the UN, Meron Reuben, said Israel would only stop settlement construction after a peace agreement was concluded, and expressed concern were Arab countries to press for UN recognition of a Palestinian state before such an accord. He cited both Israel's dismantlement of settlements in the Sinai, which took place after a peace agreement, and its unilateral dismantlement of settlements in the Gaza Strip. He presumed that settlements would stop being built were Palestinians to establish a state in a given area.
The Clinton Parameters, a 2000 peace proposal by then-U.S. President Bill Clinton, included a plan under which the Palestinian state was to include 94–96% of the West Bank and around 80% of the settlers were to be under Israeli sovereignty; in exchange, Israel would concede some territory (a so-called 'territory exchange' or 'land swap') within the Green Line (1967 borders). The swap would consist of 1–3% of Israeli territory, such that the final borders of the West Bank part of the Palestinian state would include 97% of the land of the original borders.
In 2010, Palestinian Authority President Mahmoud Abbas said that the Palestinians and Israel have agreed on the principle of a land swap. The issue of the ratio of land Israel would give to the Palestinians in exchange for keeping settlement blocs is an issue of dispute, with the Palestinians demanding that the ratio be 1:1, and Israel insisting that other factors be considered as well.
Under any peace deal with the Palestinians, Israel intends to keep the major settlement blocs close to its borders, which contain over 80% of the settlers. Prime Ministers Yitzhak Rabin, Ariel Sharon, and Benjamin Netanyahu have all stated Israel's intent to keep such blocs under any peace agreement. U.S. President George W. Bush acknowledged that such areas should be annexed to Israel in a 2004 letter to Prime Minister Sharon.
The European Union position is that any annexation of settlements should be done as part of mutually agreed land swaps, which would see the Palestinians controlling territory equivalent to the territory captured in 1967. The EU says that it will not recognise any changes to the 1967 borders without an agreement between the parties.
Israeli Foreign Minister Avigdor Lieberman has proposed a plan which would see settlement blocs annexed to Israel in exchange for heavily Arab areas inside Israel as part of a population exchange.
According to Mitchell G. Bard: "Ultimately, Israel may decide to unilaterally disengage from the West Bank and determine which settlements it will incorporate within the borders it delineates. Israel would prefer, however, to negotiate a peace treaty with the Palestinians that would specify which Jewish communities will remain intact within the mutually agreed border of Israel, and which will need to be evacuated. Israel will undoubtedly insist that some or all of the "consensus" blocs become part of Israel".
A number of proposals for the granting of Palestinian citizenship or residential permits to Jewish settlers in return for the removal of Israeli military installations from the West Bank have been fielded by such individuals as Arafat, Ibrahim Sarsur and Ahmed Qurei. In contrast, Mahmoud Abbas said in July 2013 that "In a final resolution, we would not see the presence of a single Israeli—civilian or soldier—on our lands."
Israeli Minister Moshe Ya'alon said in April 2010 that "just as Arabs live in Israel, so, too, should Jews be able to live in Palestine." ... "If we are talking about coexistence and peace, why the [Palestinian] insistence that the territory they receive be ethnically cleansed of Jews?".
The idea has been expressed both by advocates of the two-state solution and by supporters of the settlers and of conservative or fundamentalist currents in Israeli Judaism who, while objecting to any withdrawal, claim stronger links to the land than to the State of Israel.
On 19 June 2011, Haaretz reported that the Israeli cabinet voted to revoke Defense Minister Ehud Barak's authority to veto new settlement construction in the West Bank, by transferring this authority from the Agriculture Ministry, headed by Barak ally Orit Noked, to the Prime Minister's office.
In 2009, newly elected Prime Minister Benjamin Netanyahu said: "I have no intention of building new settlements in the West Bank... But like all the governments there have been until now, I will have to meet the needs of natural growth in the population. I will not be able to choke the settlements." On 15 October 2009, he said the settlement row with the United States had been resolved.
In April 2012, four illegal outposts were retroactively legalized by the Israeli government. In June 2012, the Netanyahu government announced a plan to build 851 homes in five settlements: 300 units in Beit El and 551 units in other settlements.
Amid peace negotiations that showed few signs of progress, Israel issued tenders on 3 November 2013 for 1,700 new homes for Jewish settlers. The plots were offered in nine settlements in areas Israel says it intends to keep in any peace deal with the Palestinians. On 12 November, Peace Now revealed that the Construction and Housing Ministry had issued tenders for 24,000 more settler homes in the West Bank, including 4,000 in East Jerusalem. Some 2,500 units were planned in Ma'aleh Adumim, some 9,000 in the Gush Etzion Region, and circa 12,000 in the Binyamin Region, including 1,200 homes in the E1 area in addition to 3,000 homes in previously frozen E1 projects. Around 15,000 of the 24,000 planned homes would be east of the West Bank Barrier and would create the first new settlement blocs for two decades, and the first blocs ever outside the Barrier, far inside the West Bank.
As stated before, the Israeli government (as of 2015) has a program of residential subsidies in which Israeli settlers receive about double the amount given to Israelis in Tel Aviv and Jerusalem. In addition, settlers in isolated areas receive three times the Israeli national average. From the beginning of 2009 to the end of 2013, the Israeli settlement population as a whole increased at a rate of over 4% per year. A New York Times article in 2015 stated that this building had been "at the heart of mounting European criticism of Israel."
United Nations Security Council Resolution 2334 "Requests the Secretary-General to report to the Council every three months on the implementation of the provisions of the present resolution;" In the first of these reports, delivered verbally at a security council meeting on 24 March 2017, United Nations Special Coordinator for the Middle East Peace Process, Nickolay Mladenov, noted that Resolution 2334 called on Israel to take steps to cease all settlement activity in the Occupied Palestinian Territory, that "no such steps have been taken during the reporting period" and that instead, there had been a marked increase in statements, announcements and decisions related to construction and expansion.
The 2017 Settlement Regularization in "Judea and Samaria" Law permits retroactive legalization of outposts constructed on private Palestinian land. Following a petition challenging its legality, on 9 June 2020, Israel's Supreme Court struck down the law that had retroactively legalized about 4,000 settler homes built on privately owned Palestinian land. The Israeli Attorney General has stated that existing laws already allow legalization of Israeli constructions on private Palestinian land in the West Bank. The Attorney General, Avichai Mandelblit, has updated the High Court on his official approval of the use of a legal tactic permitting the de facto legalization of roughly 2,000 illegally built Israeli homes throughout the West Bank. The legal mechanism is known as "market regulation" and relies on the notion that wildcat Israeli homes built on private Palestinian land were built in good faith.
In a report of 22 July 2019, Peace Now notes that after a gap of six years in which there were no new outposts, the establishment of new outposts recommenced in 2012, with 32 of the current 126 outposts set up to date. Two outposts were subject to eviction, 15 were legalized, and at least 35 are in the process of legalization.
The Israeli government announced in 2019 that it had made monetary grants available for the construction of hotels in Area C of the West Bank.
According to Peace Now, approvals for building in Israeli settlements in East Jerusalem expanded by 60% between 2017, when Donald Trump became US president, and 2019.
On 9 July 2021, Michael Lynk, U.N. special rapporteur on human rights in the occupied Palestinian territory, addressing a session of the UN Human Rights Council in Geneva, said "I conclude that the Israeli settlements do amount to a war crime," and "I submit to you that this finding compels the international community...to make it clear to Israel that its illegal occupation, and its defiance of international law and international opinion, can and will no longer be cost-free." Israel, which does not recognize Lynk's mandate, boycotted the session.
A new Israeli government, formed on 13 June 2021, declared a "status quo" in settlements policy. According to Peace Now, as of 28 October 2021 this had not been the case: on 24 October 2021, tenders were published for 1,355 housing units, plus another 83 in Givat HaMatos, and on 27 October 2021, approval was given for 3,000 housing units, including in settlements deep inside the West Bank. These developments were condemned by the U.S. as well as by the United Kingdom, Russia and 12 European countries, while UN experts Michael Lynk, Special Rapporteur on the situation of human rights in the Palestinian Territory occupied since 1967, and Balakrishnan Rajagopal (United States of America), UN Special Rapporteur on adequate housing, said that settlement expansion should be treated as a "presumptive war crime".
In February 2023, the new Israeli government under Benjamin Netanyahu approved the legalization of nine illegal settler outposts in the West Bank. Finance Minister Bezalel Smotrich took charge of most of the Civil Administration, obtaining broad authority over civilian issues in the West Bank. In March 2023, Netanyahu's government repealed a 2005 law whereby four Israeli settlements, Homesh, Sa-Nur, Ganim and Kadim, were dismantled as part of the Israeli disengagement from Gaza. In June 2023, Israel shortened the procedure for approving settlement construction and gave Finance Minister Smotrich the authority to approve one of the stages, changing the system that had operated for the previous 27 years. In its first six months, the government advanced construction of 13,000 housing units in settlements, almost triple the amount advanced in the whole of 2022. | [
{
"paragraph_id": 0,
"text": "Israeli settlements or colonies are civilian communities where Israeli citizens live, almost exclusively of Jewish identity or ethnicity, built on lands occupied by Israel since the Six-Day War in 1967. The international community considers Israeli settlements to be illegal under international law, though Israel disputes this.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Israeli settlements currently exist in the West Bank (including East Jerusalem), claimed by the State of Palestine as its sovereign territory, and in the Golan Heights, which is internationally considered Syrian territory. East Jerusalem and the Golan Heights have been effectively annexed by Israel, though the international community has rejected any change of status and considers each occupied territory. Although the West Bank settlements are on land administered under Israeli military rule rather than civil law, Israeli civil law is \"pipelined\" into the settlements, such that Israeli citizens living there are treated similarly to those living in Israel. In the West Bank, Israel continues to expand its remaining settlements as well as settling new areas, despite pressure from the international community to desist. The international community regards both territories as held under Israeli occupation and the localities established there to be illegal settlements. The International Court of Justice found the settlements to be illegal in its 2004 advisory opinion on the West Bank barrier.",
"title": ""
},
{
"paragraph_id": 2,
"text": "As of January 2023, there are 144 Israeli settlements in the West Bank, including 12 in East Jerusalem. There are over 100 Israeli illegal outposts in the West Bank. In total, over 450,000 Israeli settlers live in the West Bank excluding East Jerusalem, with an additional 220,000 Jewish settlers residing in East Jerusalem. Additionally, over 25,000 Israeli settlers live in the Golan Heights. Israeli settlements had previously been built within the Egyptian territory of the Sinai Peninsula, and within the Palestinian territory of the Gaza Strip; however, Israel evacuated and dismantled the 18 Sinai settlements following the 1979 Egypt–Israel peace agreement and all of the 21 settlements in the Gaza Strip, along with four in the West Bank, in 2005 as part of its unilateral disengagement from Gaza.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The transfer by an occupying power of its civilian population into the territory it occupies is a war crime, although Israel disputes that this applies to the West Bank. On 20 December 2019, the International Criminal Court announced an International Criminal Court investigation in Palestine into alleged war crimes. The presence and ongoing expansion of existing settlements by Israel and the construction of settlement outposts is frequently criticized as an obstacle to the Israeli–Palestinian peace process by the Palestinians, and third parties such as the OIC, the United Nations, Russia, the United Kingdom, France, and the European Union have echoed those criticisms. The international community considers the settlements to be illegal under international law, and the United Nations has repeatedly upheld the view that Israel's construction of settlements constitutes a violation of the Fourth Geneva Convention. The United States for decades considered the settlements to be \"illegitimate\", until the Trump administration in November 2019 shifted its position, declaring \"the establishment of Israeli civilian settlements in the West Bank is not per se inconsistent with international law.\"",
"title": ""
},
{
"paragraph_id": 4,
"text": "Certain observers and Palestinians occasionally use the term \"Israeli colonies\" as a substitute for the term \"settlements\". Settlements range in character from farming communities and frontier villages to urban suburbs and neighborhoods. The four largest settlements, Modi'in Illit, Ma'ale Adumim, Beitar Illit and Ariel, have achieved city status. Ariel has 18,000 residents, while the rest have around 37,000 to 55,500 each.",
"title": "Name and characterization"
},
{
"paragraph_id": 5,
"text": "Settlement has an economic dimension, much of it driven by the significantly lower costs of housing for Israeli citizens living in Israeli settlements compared to the cost of housing and living in Israel proper. Government spending per citizen in the settlements is double that spent per Israeli citizen in Tel Aviv and Jerusalem, while government spending for settlers in isolated Israeli settlements is three times the Israeli national average. Most of the spending goes to the security of the Israeli citizens living there.",
"title": "Housing costs and state subventions"
},
{
"paragraph_id": 6,
"text": "As of January 2023, there are 144 Israeli settlements in the West Bank, including 12 in East Jerusalem. In addition, there are over 100 Israeli illegal outposts in the West Bank. In total, over 500,000 Israeli settlers live in the West Bank excluding East Jerusalem, with an additional 220,000 Jewish settlers residing in East Jerusalem.",
"title": "Number of settlements and inhabitants"
},
{
"paragraph_id": 7,
"text": "Additionally, over 20,000 Israeli citizens live in settlements in the Golan Heights.",
"title": "Number of settlements and inhabitants"
},
{
"paragraph_id": 8,
"text": "Following the 1967 Six-Day War, Israel occupied a number of territories. It took over the remainder of the Palestinian Mandate territories of the West Bank including East Jerusalem, from Jordan which had controlled the territories since the 1948 Arab-Israeli war, and the Gaza Strip from Egypt, which had held Gaza under occupation since 1949. From Egypt, it also captured the Sinai Peninsula and from Syria it captured most of the Golan Heights, which since 1981 has been administered under the Golan Heights Law.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "As early as September 1967, Israeli settlement policy was progressively encouraged by the Labor government of Levi Eshkol. The basis for Israeli settlement in the West Bank became the Allon Plan, named after its inventor Yigal Allon. It implied Israeli annexation of major parts of the Israeli-occupied territories, especially East Jerusalem, Gush Etzion and the Jordan Valley. The settlement policy of the government of Yitzhak Rabin was also derived from the Allon Plan.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The first settlement was Kfar Etzion, in the southern West Bank, although that location was outside the Allon Plan. Many settlements began as Nahal settlements. They were established as military outposts and later expanded and populated with civilian inhabitants. According to a secret document dating to 1970, obtained by Haaretz, the settlement of Kiryat Arba was established by confiscating land by military order and falsely representing the project as being strictly for military use while in reality, Kiryat Arba was planned for settler use. The method of confiscating land by military order for establishing civilian settlements was an open secret in Israel throughout the 1970s, but publication of the information was suppressed by the military censor.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In the 1970s, Israel's methods for seizing Palestinian land to establish settlements included requisitioning for ostensibly military purposes and spraying of land with poison.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The Likud government of Menahem Begin, from 1977, was more supportive to settlement in other parts of the West Bank, by organizations like Gush Emunim and the Jewish Agency/World Zionist Organization, and intensified the settlement activities. In a government statement, Likud declared that the entire historic Land of Israel is the inalienable heritage of the Jewish people and that no part of the West Bank should be handed over to foreign rule. Ariel Sharon declared in the same year (1977) that there was a plan to settle 2 million Jews in the West Bank by 2000. The government abrogated the prohibition from purchasing occupied land by Israelis; the \"Drobles Plan\", a plan for large-scale settlement in the West Bank meant to prevent a Palestinian state under the pretext of security became the framework for its policy. The \"Drobles Plan\" from the World Zionist Organization, dated October 1978 and named \"Master Plan for the Development of Settlements in Judea and Samaria, 1979–1983\", was written by the Jewish Agency director and former Knesset member Matityahu Drobles. In January 1981, the government adopted a follow-up plan from Drobles, dated September 1980 and named \"The current state of the settlements in Judea and Samaria\", with more details about settlement strategy and policy.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Since 1967, government-funded settlement projects in the West Bank are implemented by the \"Settlement Division\" of the World Zionist Organization. Though formally a non-governmental organization, it is funded by the Israeli government and leases lands from the Civil Administration to settle in the West Bank. It is authorized to create settlements in the West Bank on lands licensed to it by the Civil Administration. Traditionally, the Settlement Division has been under the responsibility of the Agriculture Ministry. Since the Oslo Accords, it was always housed within the Prime Minister's Office (PMO). In 2007, it was moved back to the Agriculture Ministry. In 2009, the Netanyahu Government decided to subject all settlement activities to additional approval of the Prime Minister and the Defense Minister. In 2011, Netanyahu sought to move the Settlement Division again under the direct control of (his own) PMO, and to curtail Defense Minister Ehud Barak's authority.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "At the presentation of the Oslo II Accord on 5 October 1995 in the Knesset, PM Yitzhak Rabin expounded the Israeli settlement policy in connection with the permanent solution to the conflict. Israel wanted \"a Palestinian entity, less than a state, which will be a home to most of the Palestinian residents living in the Gaza Strip and the West Bank\". It wanted to keep settlements beyond the Green Line including Ma'ale Adumim and Givat Ze'ev in East Jerusalem. Blocs of settlements should be established in the West Bank. Rabin promised not to return to the 4 June 1967 lines.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In June 1997, the Likud government of Benjamin Netanyahu presented its \"Allon Plus Plan\". This plan holds the retention of some 60% of the West Bank, including the \"Greater Jerusalem\" area with the settlements Gush Etzion and Ma'aleh Adumim, other large concentrations of settlements in the West Bank, the entire Jordan Valley, a \"security area\", and a network of Israeli-only bypass roads.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "In the Road map for peace of 2002, which was never implemented, the establishment of a Palestinian state was acknowledged. Outposts would be dismantled. However, many new outposts appeared instead, few were removed. Israel's settlement policy remained unchanged. Settlements in East Jerusalem and remaining West Bank were expanded.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "While according to official Israeli policy no new settlements were built, at least some hundred unauthorized outposts were established since 2002 with state funding in the 60% of the West Bank that was not under Palestinian administrative control and the population growth of settlers did not diminish.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In 2005, all 21 settlements in the Gaza Strip and four in the northern West Bank were forcibly evacuated as part of Israeli disengagement from the Gaza Strip, known to some in Israel as \"the Expulsion\". However, the disengagement was more than compensated by transfers to the West Bank.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "After the failure of the Roadmap, several new plans emerged to settle in major parts of the West Bank. In 2011, Haaretz revealed the Civil Administration's \"Blue Line\"-plan, written in January 2011, which aims to increase Israeli \"state-ownership\" of West Bank land (\"state lands\") and settlement in strategic areas like the Jordan Valley and the northern Dead Sea area. In March 2012, it was revealed that the Civil Administration over the years covertly allotted 10% of the West Bank for further settlement. Provisional names for future new settlements or settlement expansions were already assigned. The plan includes many Palestinian built-up sites in the Areas A and B.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Some settlements are self-contained cities with a stable population in the tens of thousands, infrastructure, and all other features of permanence. Examples are Beitar Illit (a city of close to 45,000 residents), Ma'ale Adumim, Modi'in Illit, and Ariel (almost 20,000 residents). Some are towns with a local council status with populations of 2,000–20,0000, such as Alfei Menashe, Eli, Elkana, Efrat and Kiryat Arba. There are also clusters of villages governed by a local elected committee and regional councils that are responsible for municipal services. Examples are Kfar Adumim, Neve Daniel, Kfar Tapuach and Ateret. Kibbutzim and moshavim in the territories include Argaman, Gilgal, Niran and Yitav. Jewish neighborhoods have been built on the outskirts of Arab neighborhoods, for example in Hebron. In Jerusalem, there are urban neighborhoods where Jews and Arabs live together: the Muslim Quarter, Silwan, Abu Tor, Sheikh Jarrah and Shimon HaTzadik.",
"title": "Geography and municipal status"
},
{
"paragraph_id": 21,
"text": "Under the Oslo Accords, the West Bank was divided into three separate parts designated as Area A, Area B and Area C. Leaving aside the position of East Jerusalem, all of the settlements are in Area C which comprises about 60% of the West Bank.",
"title": "Geography and municipal status"
},
{
"paragraph_id": 22,
"text": "Some settlements were established on sites where Jewish communities had existed during the British Mandate of Palestine or even since the First Aliyah or ancient times.",
"title": "Resettlement of former Jewish communities"
},
{
"paragraph_id": 23,
"text": "At the end of 2010, 534,224 Jewish Israeli lived in the West Bank, including East Jerusalem. 314,132 of them lived in the 121 authorised settlements and 102 unauthorised settlement outposts on the West Bank, 198,629 were living in East Jerusalem, and almost 20,000 lived in settlements in the Golan Heights.",
"title": "Demographics"
},
{
"paragraph_id": 24,
"text": "By 2011, the number of Jewish settlers in the West Bank excluding East Jerusalem had increased to 328,423 people.",
"title": "Demographics"
},
{
"paragraph_id": 25,
"text": "In June 2014, the number of Israeli settlers in the West Bank excluding East Jerusalem had increased to 382,031 people, with over 20,000 Israeli settlers in the Golan Heights.",
"title": "Demographics"
},
{
"paragraph_id": 26,
"text": "In January 2015, the Israeli Interior Ministry gave figures of 389,250 Israeli citizens living in the West Bank outside East Jerusalem.",
"title": "Demographics"
},
{
"paragraph_id": 27,
"text": "By the end of 2016, the West Bank Jewish population had risen to 420,899, excluding East Jerusalem, where there were more than 200,000 Jews.",
"title": "Demographics"
},
{
"paragraph_id": 28,
"text": "In 2019, the number of Israeli settlers in the West Bank excluding East Jerusalem had risen to 441,600 individuals, and the number of Israeli settlers in the Golan Heights had risen to 25,261.",
"title": "Demographics"
},
{
"paragraph_id": 29,
"text": "In 2020, the number of Israeli settlers in the West Bank excluding East Jerusalem had reportedly risen to 451,700 individuals, with an additional 220,000 Jews living in East Jerusalem.",
"title": "Demographics"
},
{
"paragraph_id": 30,
"text": "Based on various sources, population dispersal can be estimated as follows:",
"title": "Demographics"
},
{
"paragraph_id": 31,
"text": "In addition to internal migration, in large though declining numbers, the settlements absorb annually about 1000 new immigrants from outside Israel. The American Kulanu organization works with such right-wing Israeli settler groups as Amishav and Shavei Israel to settle \"lost\" Jews of color in such areas where local Palestinians are being displaced. In the 1990s, the annual settler population growth was more than three times the annual population growth in Israel. Population growth has continued in the 2000s. According to the BBC, the settlements in the West Bank have been growing at a rate of 5–6% since 2001. In 2016, there were sixty thousand American Israelis living in settlements in the West Bank.",
"title": "Demographics"
},
{
"paragraph_id": 32,
"text": "The establishment of settlements in the Palestinian territories is linked to the displacement of the Palestinian populations as evidenced by a 1979 Security Council Commission which established a link between Israeli settlements and the displacement of the local population. The commission also found that those who remained were under consistent pressure to leave to make room for further settlers who were being encouraged into the area. In conclusion the commission stated that settlement in the Palestinian territories was causing \"profound and irreversible changes of a geographic and demographic nature\".",
"title": "Demographics"
},
{
"paragraph_id": 33,
"text": "The Israeli settlements in the West Bank fall under the administrative district of Judea and Samaria Area. Since December 2007, approval by both the Israeli Prime Minister and Israeli Defense Minister of all settlement activities (including planning) in the West Bank is required. Authority for planning and construction is held by the Israel Defense Forces Civil Administration.",
"title": "Administration and local government"
},
{
"paragraph_id": 34,
"text": "The area consists of four cities, thirteen local councils and six regional councils.",
"title": "Administration and local government"
},
{
"paragraph_id": 35,
"text": "The Yesha Council (Hebrew: מועצת יש\"ע, Moatzat Yesha, a Hebrew acronym for Judea, Samaria and Gaza) is the umbrella organization of municipal councils in the West Bank.",
"title": "Administration and local government"
},
{
"paragraph_id": 36,
"text": "The actual buildings of the Israeli settlements cover only 1 percent of the West Bank, but their jurisdiction and their regional councils extend to about 42 percent of the West Bank, according to the Israeli NGO B'Tselem. Yesha Council chairman Dani Dayan disputes the figures and claims that the settlements only control 9.2 percent of the West Bank.",
"title": "Administration and local government"
},
{
"paragraph_id": 37,
"text": "Between 2001 and 2007 more than 10,000 Israeli settlement units were built, while 91 permits were issued for Palestinian construction, and 1,663 Palestinian structures were demolished in Area C.",
"title": "Administration and local government"
},
{
"paragraph_id": 38,
"text": "West Bank Palestinians have their cases tried in Israel's military courts while Jewish Israeli settlers living in the same occupied territory are tried in civil courts. The arrangement has been described as \"de facto segregation\" by the UN Committee on the Elimination of Racial Discrimination. A bill to formally extend Israeli law to the Israeli settlements in the West Bank was rejected in 2012. The basic military laws governing the West Bank are influenced by what is called the \"pipelining\" of Israeli legislation. As a result of \"enclave law\", large portions of Israeli civil law are applied to Israeli settlements and Israeli residents in the occupied territories.",
"title": "Administration and local government"
},
{
"paragraph_id": 39,
"text": "On 31 August 2014, Israel announced it was appropriating 400 hectares of land in the West Bank to eventually house 1,000 Israel families. The appropriation was described as the largest in more than 30 years. According to reports on Israel Radio, the development is a response to the 2014 kidnapping and murder of Israeli teenagers.",
"title": "Administration and local government"
},
{
"paragraph_id": 40,
"text": "East Jerusalem is defined in the Jerusalem Law of 1980 as part of Israel and its capital, Jerusalem. As such it is administered as part of the city and its district, the Jerusalem District. Pre-1967 residents of East Jerusalem and their descendants have residency status in the city but many have refused Israeli citizenship. Thus, the Israeli government maintains an administrative distinction between Israeli citizens and non-citizens in East Jerusalem, but the Jerusalem municipality does not.",
"title": "Administration and local government"
},
{
"paragraph_id": 41,
"text": "The Golan Heights is administered under Israeli civil law as the Golan sub-district, a part of the Northern District. Israel makes no legal or administrative distinction between pre-1967 communities in the Golan Heights (mainly Druze) and the post-1967 settlements.",
"title": "Administration and local government"
},
{
"paragraph_id": 42,
"text": "After the capture of the Sinai Peninsula from Egypt in the 1967 Six-Day War, settlements were established along the Gulf of Aqaba and in northeast Sinai, just below the Gaza Strip. Israel had plans to expand the settlement of Yamit into a city with a population of 200,000, though the actual population of Yamit did not exceed 3,000. The Sinai Peninsula was returned to Egypt in stages beginning in 1979 as part of the Egypt–Israel peace treaty. As required by the treaty, in 1982 Israel evacuated the Israeli civilian population from the 18 Sinai settlements in Sinai. In some instances evacuations were done forcefully, such as the evacuation of Yamit. All the settlements were then dismantled.",
"title": "Administration and local government"
},
{
"paragraph_id": 43,
"text": "Before Israel's unilateral disengagement plan in which the Israeli settlements were evacuated, there were 21 settlements in the Gaza Strip under the administration of the Hof Aza Regional Council. The land was allocated in such a way that each Israeli settler disposed of 400 times the land available to the Palestinian refugees, and 20 times the volume of water allowed to the peasant farmers of the Strip.",
"title": "Administration and local government"
},
{
"paragraph_id": 44,
"text": "The consensus view in the international community is that the existence of Israeli settlements in the West Bank including East Jerusalem and the Golan Heights is in violation of international law. The Fourth Geneva Convention includes statements such as \"the Occupying Power shall not deport or transfer parts of its own civilian population into the territory it occupies\". On 20 December 2019, International Criminal Court chief prosecutor Fatou Bensouda announced an International Criminal Court investigation in Palestine into alleged war crimes committed during the Israeli–Palestinian conflict. At present, the view of the international community, as reflected in numerous UN resolutions, regards the building and existence of Israeli settlements in the West Bank, East Jerusalem and the Golan Heights as a violation of international law. UN Security Council Resolution 446 refers to the Fourth Geneva Convention as the applicable international legal instrument, and calls upon Israel to desist from transferring its own population into the territories or changing their demographic makeup. The reconvened Conference of the High Contracting Parties to the Geneva Conventions has declared the settlements illegal as has the primary judicial organ of the UN, the International Court of Justice.",
"title": "Legal status"
},
{
"paragraph_id": 45,
"text": "The position of successive Israeli governments is that all authorized settlements are entirely legal and consistent with international law. In practice, Israel does not accept that the Fourth Geneva Convention applies de jure, but has stated that on humanitarian issues it will govern itself de facto by its provisions, without specifying which these are. The scholar and jurist Eugene Rostow has disputed the illegality of authorized settlements.",
"title": "Legal status"
},
{
"paragraph_id": 46,
"text": "Under Israeli law, West Bank settlements must meet specific criteria to be legal. In 2009, there were approximately 100 small communities that did not meet these criteria and are referred to as illegal outposts.",
"title": "Legal status"
},
{
"paragraph_id": 47,
"text": "In 2014 twelve EU countries warned businesses against involving themselves in the settlements. According to the warnings, economic activities relating to the settlements involve legal and economic risks stemming from the fact that the settlements are built on occupied land not recognized as Israel's.",
"title": "Legal status"
},
{
"paragraph_id": 48,
"text": "The consensus of the international community – the vast majority of states, the overwhelming majority of legal experts, the International Court of Justice and the UN, is that settlements are in violation of international law. After the Six-Day War, in 1967, Theodor Meron, legal counsel to the Israeli Foreign Ministry stated in a legal opinion to the Prime Minister,",
"title": "Legal status"
},
{
"paragraph_id": 49,
"text": "\"My conclusion is that civilian settlement in the administered territories contravenes the explicit provisions of the Fourth Geneva Convention.\"",
"title": "Legal status"
},
{
"paragraph_id": 50,
"text": "This legal opinion was sent to Prime Minister Levi Eshkol. However, it was not made public at the time. The Labor cabinet allowed settlements despite the warning. This paved the way for future settlement growth. In 2007, Meron stated that \"I believe that I would have given the same opinion today.\"",
"title": "Legal status"
},
{
"paragraph_id": 51,
"text": "In 1978, the Legal Adviser of the Department of State of the United States reached the same conclusion.",
"title": "Legal status"
},
{
"paragraph_id": 52,
"text": "The International Court of Justice, in its advisory opinion, has since ruled that Israel is in breach of international law by establishing settlements in Occupied Palestinian Territory, including East Jerusalem. The Court maintains that Israel cannot rely on its right of self-defense or necessity to impose a regime that violates international law. The Court also ruled that Israel violates basic human rights by impeding liberty of movement and the inhabitants' right to work, health, education and an adequate standard of living.",
"title": "Legal status"
},
{
"paragraph_id": 53,
"text": "International intergovernmental organizations such as the Conference of the High Contracting Parties to the Fourth Geneva Convention, major organs of the United Nations, the European Union, and Canada, also regard the settlements as a violation of international law. The Committee on the Elimination of Racial Discrimination wrote that \"The status of the settlements was clearly inconsistent with Article 3 of the Convention, which, as noted in the Committee's General Recommendation XIX, prohibited all forms of racial segregation in all countries. There is a consensus among publicists that the prohibition of racial discrimination, irrespective of territories, is an imperative norm of international law.\" Amnesty International, and Human Rights Watch have also characterized the settlements as a violation of international law.",
"title": "Legal status"
},
{
"paragraph_id": 54,
"text": "In late January 2013 a report drafted by three justices, presided over by Christine Chanet, and issued by the United Nations Human Rights Council declared that Jewish settlements constituted a creeping annexation based on multiple violations of the Geneva Conventions and international law, and stated that if Palestine ratified the Rome Accord, Israel could be tried for \"gross violations of human rights law and serious violations of international humanitarian law.\" A spokesman for Israel's Foreign Ministry declared the report 'unfortunate' and accused the UN's Human Rights Council of a \"systematically one-sided and biased approach towards Israel.\"",
"title": "Legal status"
},
{
"paragraph_id": 55,
"text": "The Supreme Court of Israel, with a variety of different justices sitting, has repeatedly stated that Israel's presence in the West Bank is in violation of international law.",
"title": "Legal status"
},
{
"paragraph_id": 56,
"text": "Four prominent jurists cited the concept of the \"sovereignty vacuum\" in the immediate aftermath of the Six-Day War to describe the legal status of the West Bank and Gaza: Yehuda Zvi Blum in 1968, Elihu Lauterpacht in 1968, Julius Stone in 1969 and 1981, and Stephen M. Schwebel in 1970. Eugene V. Rostow also argued in 1979 that the occupied territories' legal status was undetermined.",
"title": "Legal status"
},
{
"paragraph_id": 57,
"text": "Professor Ben Saul took exception to this view, arguing that Article 49(6) can be read to include voluntary or assisted transfers, as indeed it was in the advisory opinion of the International Court of Justice which had expressed this interpretation in the Israeli Wall Advisory Opinion (2003).",
"title": "Legal status"
},
{
"paragraph_id": 58,
"text": "Israel maintains that a temporary use of land and buildings for various purposes is permissible under a plea of military necessity and that the settlements fulfilled security needs. Israel argues that its settlement policy is consistent with international law, including the Fourth Geneva Convention, while recognising that some settlements have been constructed illegally on private land. The Israeli Supreme Court has ruled that the power of the Civil Administration and the Military Commander in the occupied territories is limited by the entrenched customary rules of public international law as codified in the Hague Regulations. In 1998 the Israeli Minister of Foreign Affairs produced \"The International Criminal Court Background Paper\". It concludes",
"title": "Legal status"
},
{
"paragraph_id": 59,
"text": "International law has long recognised that there are crimes of such severity they should be considered \"international crimes.\" Such crimes have been established in treaties such as the Genocide Convention and the Geneva Conventions.... The following are Israel's primary issues of concern [ie with the rules of the ICC]: The inclusion of settlement activity as a \"war crime\" is a cynical attempt to abuse the Court for political ends. The implication that the transfer of civilian population to occupied territories can be classified as a crime equal in gravity to attacks on civilian population centres or mass murder is preposterous and has no basis in international law.",
"title": "Legal status"
},
{
"paragraph_id": 60,
"text": "A UN conference was held in Rome in 1998, where Israel was one of seven countries to vote against the Rome Statute to establish the International Criminal Court. Israel was opposed to a provision that included as a war crime the transfer of civilian populations into territory the government occupies. Israel has signed the statute, but not ratified the treaty.",
"title": "Legal status"
},
{
"paragraph_id": 61,
"text": "A 1996 amendment to an Israeli military order, states that land privately owned can not be part of a settlement, unless the land in question has been confiscated for military purposes. In 2006 Peace Now acquired a report, which it claims was leaked from the Israeli Government's Civil Administration, indicating that up to 40 percent of the land Israel plans to retain in the West Bank is privately owned by Palestinians. Peace Now called this a violation of Israeli law. Peace Now published a comprehensive report about settlements on private lands. In the wake of a legal battle, Peace Now lowered the figure to 32 percent, which the Civil Administration also denied. The Washington Post reported that \"The 38-page report offers what appears to be a comprehensive argument against the Israeli government's contention that it avoids building on private land, drawing on the state's own data to make the case.\"",
"title": "Land ownership"
},
{
"paragraph_id": 62,
"text": "In February 2008, the Civil Administration stated that the land on which more than a third of West Bank settlements was built had been expropriated by the IDF for \"security purposes.\" The unauthorized seizure of private Palestinian land was defined by the Civil Administration itself as 'theft.' According to B'Tselem, more than 42 percent of the West Bank are under control of the Israeli settlements, 21 percent of which was seized from private Palestinian owners, much of it in violation of the 1979 Israeli Supreme Court decision.",
"title": "Land ownership"
},
{
"paragraph_id": 63,
"text": "In 1979, the government decided to extend settlements or build new ones only on \"state lands\".",
"title": "Land ownership"
},
{
"paragraph_id": 64,
"text": "A secret database, drafted by a retired senior officer, Baruch Spiegel, on orders from former defense minister Shaul Mofaz, found that some settlements deemed legal by Israel were illegal outposts, and that large portions of Ofra, Elon Moreh and Beit El were built on private Palestinian land. The \"Spiegel report\" was revealed by Haaretz in 2009. Many settlements are largely built on private lands, without approval of the Israeli Government. According to Israel, the bulk of the land was vacant, was leased from the state, or bought fairly from Palestinian landowners.",
"title": "Land ownership"
},
{
"paragraph_id": 65,
"text": "Invoking the Absentees' Property Laws to transfer, sell or lease property in East Jerusalem owned by Palestinians who live elsewhere without compensation has been criticized both inside and outside of Israel. Opponents of the settlements claim that \"vacant\" land belonged to Arabs who fled or collectively to an entire village, a practice that developed under Ottoman rule. B'Tselem charged that Israel is using the absence of modern legal documents for the communal land as a legal basis for expropriating it. These \"abandoned lands\" are sometimes laundered through a series of fraudulent sales.",
"title": "Land ownership"
},
{
"paragraph_id": 66,
"text": "According to Amira Hass, one of the techniques used by Israel to expropriate Palestinian land is to place desired areas under a 'military firing zone' classification, and then issue orders for the evacuation of Palestinians from the villages in that range, while allowing contiguous Jewish settlements to remain unaffected.",
"title": "Land ownership"
},
{
"paragraph_id": 67,
"text": "Amnesty International argues that Israel's settlement policy is discriminatory and a violation of Palestinian human rights. B'Tselem claims that Israeli travel restrictions impact on Palestinian freedom of movement and Palestinian human rights have been violated in Hebron due to the presence of the settlers within the city. According to B'Tselem, over fifty percent of West Bank land expropriated from Palestinians has been used to establish settlements and create reserves of land for their future expansion. The seized lands mainly benefit the settlements and Palestinians cannot use them. The roads built by Israel in the West Bank to serve the settlements are closed to Palestinian vehicles' and act as a barrier often between villages and the lands on which they subsist.",
"title": "Effects on Palestinian human rights"
},
{
"paragraph_id": 68,
"text": "Human Rights Watch and other human rights observer volunteer regularly file reports on \"settler violence,\" referring to stoning and shooting incidents involving Israeli settlers. Israel's withdrawal from Gaza and Hebron have led to violent settler protests and disputes over land and resources. Meron Benvenisti described the settlement enterprise as a \"commercial real estate project that conscripts Zionist rhetoric for profit.\"",
"title": "Effects on Palestinian human rights"
},
{
"paragraph_id": 69,
"text": "The construction of the Israeli West Bank barrier has been criticized as an infringement on Palestinian human and land rights. The United Nations Office for the Coordination of Humanitarian Affairs estimated that 10% of the West Bank would fall on the Israeli side of the barrier.",
"title": "Effects on Palestinian human rights"
},
{
"paragraph_id": 70,
"text": "In July 2012, the UN Human Rights Council decided to set up a probe into Jewish settlements. The report of the independent international fact-finding mission which investigated the \"implications of the Israeli settlements on the civil, political, economic, social and cultural rights of the Palestinian people throughout the Occupied Palestinian Territory\" was published in February 2013.",
"title": "Effects on Palestinian human rights"
},
{
"paragraph_id": 71,
"text": "In February 2020, the Office of the United Nations High Commissioner for Human Rights published a list of 112 companies linked to activities related to Israeli settlements in the occupied West Bank.",
"title": "Effects on Palestinian human rights"
},
{
"paragraph_id": 72,
"text": "Goods produced in Israeli settlements are able to stay competitive on the global market, in part because of massive state subsidies they receive from the Israeli government. Farmers and producers are given state assistance, while companies that set up in the territories receive tax breaks and direct government subsidies. An Israeli government fund has also been established to help companies pay customs penalties. Palestinian officials estimate that settlers sell goods worth some $500 million to the Palestinian market. Israel has built 16 industrial zones, containing roughly 1000 industrial plants, in the West Bank and East Jerusalem on acreage that consumes large parts of the territory planned for a future Palestinian state. According to Jodi Rudoren these installations both entrench the occupation and provide work for Palestinians, even those opposed to it. The 16 parks are located at Shaked, Beka'ot, Baran, Karnei Shomron, Emmanuel, Barkan, Ariel, Shilo, Halamish, Ma'ale Efraim, Sha'ar Binyamin, Atarot, Mishor Adumim, Gush Etzion, Kiryat Arba and Metarim (2001). In spite of this, the West Bank settlements have failed to develop a self-sustaining local economy. About 60% of the settler workforce commutes to Israel for work. The settlements rely primarily on the labor of their residents in Israel proper rather than local manufacturing, agriculture, or research and development. Of the industrial parks in the settlements, there are only two significant ones, at Ma'ale Adumim and Barkan, with most of the workers there being Palestinian. Only a few hundred settler households cultivate agricultural land, and rely primarily on Palestinian labor in doing so.",
"title": "Economy"
},
{
"paragraph_id": 73,
"text": "Settlement has an economic dimension, much of it driven by the significantly lower costs of housing for Israeli citizens living in Israeli settlements compared to the cost of housing and living in Israel proper. Government spending per citizen in the settlements is double that spent per Israeli citizen in Tel Aviv and Jerusalem, while government spending for settlers in isolated Israeli settlements is three times the Israeli national average. Most of the spending goes to the security of the Israeli citizens living there.",
"title": "Economy"
},
{
"paragraph_id": 74,
"text": "According to Israeli government estimates, $230 million worth of settler goods including fruit, vegetables, cosmetics, textiles and toys are exported to the EU each year, accounting for approximately 2% of all Israeli exports to Europe. A 2013 report of Profundo revealed that at least 38 Dutch companies imported settlement products.",
"title": "Economy"
},
{
"paragraph_id": 75,
"text": "European Union law requires a distinction to be made between goods originating in Israel and those from the occupied territories. The former benefit from preferential custom treatment according to the EU-Israel Association Agreement (2000); the latter don't, having been explicitly excluded from the agreement. In practice, however, settler goods often avoid mandatory customs through being labelled as originating in Israel, while European customs authorities commonly fail to complete obligatory postal code checks of products to ensure they have not originated in the occupied territories.",
"title": "Economy"
},
{
"paragraph_id": 76,
"text": "In 2009, the United Kingdom's Department for the Environment, Food and Rural Affairs issued new guidelines concerning labelling of goods imported from the West Bank. The new guidelines require labelling to clarify whether West Bank products originate from settlements or from the Palestinian economy. Israel's foreign ministry said that the UK was \"catering to the demands of those whose ultimate goal is the boycott of Israeli products\"; but this was denied by the UK government, who said that the aim of the new regulations was to allow consumers to choose for themselves what produce they buy. Denmark has similar legislation requiring food products from settlements in the occupied territories to be accurately labelled. In June 2022, Norway also stated that it would begin complying with EU regulation to label produce originating from Israeli settlements in the West Bank and Golan Heights as such.",
"title": "Economy"
},
{
"paragraph_id": 77,
"text": "On 12 November 2019 the Court of Justice of the European Union in a ruling covering all territory Israel captured in the 1967 war decided that labels on foodstuffs must not imply that goods produced in occupied territory came from Israel itself and must \"prevent consumers from being misled as to the fact that the State of Israel is present in the territories concerned as an occupying power and not as a sovereign entity\". In its ruling, the court said that failing to inform EU consumers they were potentially buying goods produced in settlements denies them access to \"ethical considerations and considerations relating to the observance of international law\".",
"title": "Economy"
},
{
"paragraph_id": 78,
"text": "In January 2019 the Dail (Ireland's lower house) voted in favour, by 78 to 45, of the Control of Economic Activity (Occupied Territories) bill. This piece of legislation prohibits the purchasing of any good and/or service from the Golan Heights, East Jerusalem or West Bank settlements. As of February 2019 the bill has some stages to be completed,once codified, either a five-year jail sentence or fines of up to €250,000 ($284,000) will affect anyone who breaks this law.",
"title": "Economy"
},
{
"paragraph_id": 79,
"text": "A petition under the European Citizens' Initiative, submitted in September 2021, was accepted on 20 February 2022. The petition seeks the adoption of legislation to ban trade with unlawful settlements. The petition requires a million signatures from across the EU and has received support from civil society groups including Human Rights Watch.",
"title": "Economy"
},
{
"paragraph_id": 80,
"text": "A Palestinian report argued in 2011 that settlements have a detrimental effect on the Palestinian economy, equivalent to about 85% of the nominal gross domestic product of Palestine, and that the \"occupation enterprise\" allows the state of Israel and commercial firms to profit from Palestinian natural resources and tourist potential. A 2013 report published by the World Bank analysed the impact that the limited access to Area C lands and resources had on the Palestinian economy. While settlements represent a single axis of control, it is the largest with 68% of the Area C lands reserved for the settlements. The report goes on to calculate that access to the lands and resources of Area C, including the territory in and around settlements, would increase the Palestinian GDP by some $3.5 billion (or 35%) per year.",
"title": "Economy"
},
{
"paragraph_id": 81,
"text": "The Israeli Supreme Court has ruled that Israeli companies are entitled to exploit the West Bank's natural resources for economic gain, and that international law must be \"adapted\" to the \"reality on the ground\" of long-term occupation.",
"title": "Economy"
},
{
"paragraph_id": 82,
"text": "Due to the availability of jobs offering twice the prevailing salary of the West Bank (as of August 2013), as well as high unemployment, tens of thousands of Palestinians work in Israeli settlements. According to the Manufacturers Association of Israel, some 22,000 Palestinians were employed in construction, agriculture, manufacturing and service industries. An Al-Quds University study in 2011 found that 82% of Palestinian workers said they would prefer to not work in Israeli settlements if they had alternative employment in the West Bank.",
"title": "Palestinian labour"
},
{
"paragraph_id": 83,
"text": "Palestinians have been highly involved in the construction of settlements in the West Bank. In 2013, the Palestinian Central Bureau of Statistics released their survey showing that the number of Palestinian workers who are employed by the Jewish settlements increased from 16,000 to 20,000 in the first quarter. The survey also found that Palestinians who work in Israel and the settlements are paid more than twice their salary compared to what they receive from Palestinian employers.",
"title": "Palestinian labour"
},
{
"paragraph_id": 84,
"text": "In 2008, Kav LaOved charged that Palestinians who work in Israeli settlements are not granted basic protections of Israeli labor law. Instead, they are employed under Jordanian labor law, which does not require minimum wage, payment for overtime and other social rights. In 2007, the Supreme Court of Israel ruled that Israeli labor law does apply to Palestinians working in West Bank settlements and applying different rules in the same work place constituted discrimination. The ruling allowed Palestinian workers to file lawsuits in Israeli courts. In 2008, the average sum claimed by such lawsuits stood at 100,000 shekels.",
"title": "Palestinian labour"
},
{
"paragraph_id": 85,
"text": "According to Palestinian Center for Policy and Survey Research, 63% of Palestinians opposed PA plans to prosecute Palestinians who work in the settlements. However, 72% of Palestinians support a boycott of the products they sell. Although the Palestinian Authority has criminalized working in the settlements, the director-general at the Palestinian Ministry of Labor, Samer Salameh, described the situation in February 2014 as being \"caught between two fires\". He said \"We strongly discourage work in the settlements, since the entire enterprise is illegal and illegitimate...but given the high unemployment rate and the lack of alternatives, we do not enforce the law that criminalizes work in the settlements.\"",
"title": "Palestinian labour"
},
{
"paragraph_id": 86,
"text": "Gush Emunim Underground was a militant organization that operated in 1979–1984. The organization planned attacks on Palestinian officials and the Dome of the Rock. In 1994, Baruch Goldstein of Hebron, a member of Kach carried out the Cave of the Patriarchs massacre, killing 29 Muslim worshipers and injuring 125. The attack was widely condemned by the Israeli government and Jewish community. The Palestinian leadership has accused Israel of \"encouraging and enabling\" settler violence in a bid to provoke Palestinian riots and violence in retaliation. Violence perpetrated by Israeli settlers against Palestinians constitutes terrorism according to the U.S. Department of State, and former IDF Head of Central Command Avi Mizrahi stated that such violence constitutes \"terror.\"",
"title": "Violence"
},
{
"paragraph_id": 87,
"text": "In mid-2008, a UN report recorded 222 acts of Israeli settler violence against Palestinians and IDF troops compared with 291 in 2007. This trend reportedly increased in 2009. Maj-Gen Shamni said that the number had risen from a few dozen individuals to hundreds, and called it \"a very grave phenomenon.\" In 2008–2009, the defense establishment adopted a harder line against the extremists. This group responded with a tactic dubbed \"price tagging\", vandalizing Palestinian property whenever police or soldiers were sent in to dismantle outposts. From January through to September 2013, 276 attacks by settlers against Palestinians were recorded.",
"title": "Violence"
},
{
"paragraph_id": 88,
"text": "Leading religious figures in the West Bank have harshly criticized these tactics. Rabbi Menachem Froman of Tekoa said that \"Targeting Palestinians and their property is a shocking thing, ... It's an act of hurting humanity. ... This builds a wall of fire between Jews and Arabs.\" The Yesha Council and Hanan Porat also condemned such actions. Other rabbis have been accused of inciting violence against non-Jews. In response to settler violence, the Israeli government said that it would increase law enforcement and cut off aid to illegal outposts. Some settlers are thought to lash out at Palestinians because they are \"easy victims.\" The United Nations accused Israel of failing to intervene and arrest settlers suspected of violence. In 2008, Haaretz wrote that \"Israeli society has become accustomed to seeing lawbreaking settlers receive special treatment and no other group could similarly attack Israeli law enforcement agencies without being severely punished.\"",
"title": "Violence"
},
{
"paragraph_id": 89,
"text": "In September 2011, settlers vandalized a mosque and an army base. They slashed tires and cut cables of 12 army vehicles and sprayed graffiti. In November 2011, the United Nations Office for Coordination of Human Affairs (OCHA) in the Palestinian territories published a report on settler violence that showed a significant rise compared to 2009 and 2010. The report covered physical violence and property damage such as uprooted olive trees, damaged tractors and slaughtered sheep. The report states that 90% of complaints filed by Palestinians have been closed without charge.",
"title": "Violence"
},
{
"paragraph_id": 90,
"text": "According to EU reports, Israel has created an \"atmosphere of impunity\" for Jewish attackers, which is seen as tantamount to tacit approval by the state. In the West Bank, Jews and Palestinians live under two different legal regimes and it is difficult for Palestinians to lodge complaints, which must be filed in Hebrew in Israeli settlements.",
"title": "Violence"
},
{
"paragraph_id": 91,
"text": "The 27 ministers of foreign affairs of the European Union published a report in May 2012 strongly denouncing policies of the State of Israel in the West Bank and denouncing \"continuous settler violence and deliberate provocations against Palestinian civilians.\" The report by all EU ministers called \"on the government of Israel to bring the perpetrators to justice and to comply with its obligations under international law.\"",
"title": "Violence"
},
{
"paragraph_id": 92,
"text": "In July 2014, a day after the burial of three murdered Israeli teens, Khdeir, a 16-year-old Palestinian, was forced into a car by 3 Israeli settlers on an East Jerusalem street. His family immediately reported the fact to Israeli Police who located his charred body a few hours later at Givat Shaul in the Jerusalem Forest. Preliminary results from the autopsy suggested that he was beaten and burnt while still alive. The murder suspects explained the attack as a response to the June abduction and murder of three Israeli teens. The murders contributed to a breakout of hostilities in the 2014 Israel–Gaza conflict. In July 2015, a similar incident occurred where Israeli settlers made an arson attack on two Palestinian houses, one of which was empty; however, the other was occupied, resulting in the burning to death of a Palestinian infant; the four other members of his family were evacuated to the hospital suffering serious injuries. These two incidents received condemnation from the United States, European Union and the IDF. The European Union criticized Israel for \"failing to protect the Palestinian population\".",
"title": "Violence"
},
{
"paragraph_id": 93,
"text": "While the economy of the Palestinian territories has shown signs of growth, the International Committee of the Red Cross reported that Palestinian olive farming has suffered. According to the ICRC, 10,000 olive trees were cut down or burned by settlers in 2007–2010. Foreign ministry spokesman Yigal Palmor said the report ignored official PA data showing that the economic situation of Palestinians had improved substantially, citing Mahmoud Abbas's comment to The Washington Post in May 2009, where he said \"in the West Bank, we have a good reality, the people are living a normal life.\"",
"title": "Violence"
},
{
"paragraph_id": 94,
"text": "Haaretz blamed the violence during the olive harvest on a handful of extremists. In 2010, trees belonging to both Jews and Arabs were cut down, poisoned or torched. In the first two weeks of the harvest, 500 trees owned by Palestinians and 100 trees owned by Jews had been vandalized. In October 2013, 100 trees were cut down.",
"title": "Violence"
},
{
"paragraph_id": 95,
"text": "Violent attacks on olive trees seem to be facilitated by the apparently systematic refusal of the Israeli authorities to allow Palestinians to visit their own groves, sometimes for years, especially in cases where the groves are deemed to be too close to settlements.",
"title": "Violence"
},
{
"paragraph_id": 96,
"text": "Israeli civilians living in settlements have been targeted by violence from armed Palestinian groups. These groups, according to Human Rights Watch, assert that settlers are \"legitimate targets\" that have \"forfeited their civilian status by residing in settlements that are illegal under international humanitarian law.\" Both Human Rights Watch and B'tselem rejected this argument on the basis that the legal status of the settlements has no effect on the civilian status of their residents. Human Rights Watch said the \"prohibition against intentional attacks against civilians is absolute.\" B'tselem said \"The settlers constitute a distinctly civilian population, which is entitled to all the protections granted civilians by international law. The Israeli security forces' use of land in the settlements or the membership of some settlers in the Israeli security forces does not affect the status of the other residents living among them, and certainly does not make them proper targets of attack.\"",
"title": "Violence"
},
{
"paragraph_id": 97,
"text": "Fatal attacks on settlers have included firing of rockets and mortars and drive-by shootings, also targeting infants and children. Violent incidents include the murder of Shalhevet Pass, a ten-month-old baby shot by a Palestinian sniper in Hebron, and the murder of two teenagers by unknown perpetrators on 8 May 2001, whose bodies were hidden in a cave near Tekoa, a crime that Israeli authorities suggest may have been committed by Palestinian terrorists. In the Bat Ayin axe attack, children in Bat Ayin were attacked by a Palestinian wielding an axe and a knife. A 13-year-old boy was killed and another was seriously wounded. Rabbi Meir Hai, a father of seven, was killed in a drive-by shooting. In August 2011, five members of one family were killed in their beds. The victims were the father Ehud (Udi) Fogel, the mother Ruth Fogel, and three of their six children—Yoav, 11, Elad, 4, and Hadas, the youngest, a three-month-old infant. According to David Ha'ivri, and as reported by multiple sources, the infant was decapitated.",
"title": "Violence"
},
{
"paragraph_id": 98,
"text": "Pro-Palestinian activists who hold regular protests near the settlements have been accused of stone-throwing, physical assault and provocation. In 2008, Avshalom Peled, head of the Israel Police's Hebron district, called \"left-wing\" activity in the city dangerous and provocative, and accused activists of antagonizing the settlers in the hope of getting a reaction.",
"title": "Violence"
},
{
"paragraph_id": 99,
"text": "Municipal Environmental Associations of Judea and Samaria, an environmental awareness group, was established by the settlers to address sewage treatment problems and cooperate with the Palestinian Authority on environmental issues. According to a 2004 report by Friends of the Earth Middle East, settlers account for 10% of the population in the West Bank but produce 25% of the sewage output. Beit Duqqu and Qalqilyah have accused settlers of polluting their farmland and villagers claim children have become ill after swimming in a local stream. Legal action was taken against 14 settlements by the Israeli Ministry of the Environment. The Palestinian Authority has also been criticized by environmentalists for not doing more to prevent water pollution. Settlers and Palestinians share the mountain aquifer as a water source, and both generate sewage and industrial effluents that endanger the aquifer. Friends of the Earth Middle East claimed that sewage treatment was inadequate in both sectors. Sewage from Palestinian sources was estimated at 46 million cubic meters a year, and sources from settler sources at 15 million cubic meters a year. A 2004 study found that sewage was not sufficiently treated in many settlements, while sewage from Palestinian villages and cities flowed into unlined cesspits, streams and the open environment with no treatment at all.",
"title": "Environmental issues"
},
{
"paragraph_id": 100,
"text": "In a 2007 study, the Israel Nature and Parks Authority and Israeli Ministry of Environmental Protection, found that Palestinian towns and cities produced 56 million cubic meters of sewage per year, 94 percent discharged without adequate treatment, while Israeli sources produced 17.5 million cubic meters per year, 31.5 percent without adequate treatment.",
"title": "Environmental issues"
},
{
"paragraph_id": 101,
"text": "According to Palestinian environmentalists, the settlers operate industrial and manufacturing plants that can create pollution as many do not conform to Israeli standards. In 2005, an old quarry between Kedumim and Nablus was slated for conversion into an industrial waste dump. Pollution experts warned that the dump would threaten Palestinian water sources.",
"title": "Environmental issues"
},
{
"paragraph_id": 102,
"text": "The Consortium for Applied Research on International Migration (CARIM) has reported in their 2011 migration profile for Palestine that the reasons for individuals to leave the country are similar to those of other countries in the region and they attribute less importance to the specific political situation of the occupied Palestinian territory. Human Rights Watch in 2010 reported that Israeli settlement policies have had the effect of \"forcing residents to leave their communities\".",
"title": "Impact on Palestinian demographics"
},
{
"paragraph_id": 103,
"text": "In 2008, Condoleezza Rice suggested sending Palestinian refugees to South America, which might reduce pressure on Israel to withdraw from the settlements. Sushil P. Seth speculates that Israelis seem to feel that increasing settlements will force many Palestinians to flee to other countries and that the remainder will be forced to live under Israeli terms. Speaking anonymously with regard to Israeli policies in the South Hebron Hills, a UN expert said that the Israeli crackdown on alternative energy infrastructures like solar panels is part of a deliberate strategy in Area C.",
"title": "Impact on Palestinian demographics"
},
{
"paragraph_id": 104,
"text": "\"From December 2010 to April 2011, we saw a systematic targeting of the water infrastructure in Hebron, Bethlehem and the Jordan valley. Now, in the last couple of months, they are targeting electricity. Two villages in the area have had their electrical poles demolished. There is this systematic effort by the civil administration targeting all Palestinian infrastructure in Hebron. They are hoping that by making it miserable enough, they [the Palestinians] will pick up and leave.\"",
"title": "Impact on Palestinian demographics"
},
{
"paragraph_id": 105,
"text": "Approximately 1,500 people in 16 communities are dependent on energy produced by these installations duct business are threatened with work stoppage orders from the Israeli administration on their installation of alternative power infrastructure, and demolition orders expected to follow will darken the homes of 500 people.",
"title": "Impact on Palestinian demographics"
},
{
"paragraph_id": 106,
"text": "Ariel University, formerly the College of Judea and Samaria, is the major Israeli institution of higher education in the West Bank. With close to 13,000 students, it is Israel's largest public college. The college was accredited in 1994 and awards bachelor's degrees in arts, sciences, technology, architecture and physical therapy. On 17 July 2012, the Council for Higher Education in Judea and Samaria voted to grant the institution full university status.",
"title": "Educational institutions"
},
{
"paragraph_id": 107,
"text": "Teacher training colleges include Herzog College in Alon Shvut and Orot Israel College in Elkana. Ohalo College is located in Katzrin, in the Golan Heights. Curricula at these institutions are overseen by the Council for Higher Education in Judea and Samaria (CHE-JS).",
"title": "Educational institutions"
},
{
"paragraph_id": 108,
"text": "In March 2012, The Shomron Regional Council was awarded the Israeli Ministry of Education's first prize National Education Award in recognizing its excellence in investing substantial resources in the educational system. The Shomron Regional Council achieved the highest marks in all parameters (9.28 / 10). Gershon Mesika, the head of the regional council, declared that the award was a certificate of honour of its educators and the settlement youth who proved their quality and excellence.",
"title": "Educational institutions"
},
{
"paragraph_id": 109,
"text": "In 1983 an Israeli government plan entitled \"Master Plan and Development Plan for Settlement in Samaria and Judea\" envisaged placing a \"maximally large Jewish population\" in priority areas to accomplish incorporation of the West Bank in the Israeli \"national system\". According to Ariel Sharon, strategic settlement locations would work to preclude the formation of a Palestinian state.",
"title": "Strategic significance"
},
{
"paragraph_id": 110,
"text": "Palestinians argue that the policy of settlements constitutes an effort to preempt or sabotage a peace treaty that includes Palestinian sovereignty, and claim that the presence of settlements harm the ability to have a viable and contiguous state. This was also the view of the Israeli Vice Prime Minister Haim Ramon in 2008, saying \"the pressure to enlarge Ofra and other settlements does not stem from a housing shortage, but rather is an attempt to undermine any chance of reaching an agreement with the Palestinians ...\"",
"title": "Strategic significance"
},
{
"paragraph_id": 111,
"text": "The Israel Foreign Ministry asserts that some settlements are legitimate, as they took shape when there was no operative diplomatic arrangement, and thus they did not violate any agreement. Based on this, they assert that:",
"title": "Strategic significance"
},
{
"paragraph_id": 112,
"text": "An early evacuation took place in 1982 as part of the Egypt–Israel peace treaty, when Israel was required to evacuate its settlers from the 18 Sinai settlements. Arab parties to the conflict had demanded the dismantlement of the settlements as a condition for peace with Israel. The evacuation was carried out with force in some instances, for example in Yamit. The settlements were demolished, as it was feared that settlers might try to return to their homes after the evacuation.",
"title": "Dismantling of settlements"
},
{
"paragraph_id": 113,
"text": "Israel's unilateral disengagement plan took place in 2005. It involved the evacuation of settlements in the Gaza Strip and part of the West Bank, including all 21 settlements in Gaza and four in the West Bank, while retaining control over Gaza's borders, coastline, and airspace. Most of these settlements had existed since the early 1980s, some were over 30 years old; the total population involved was more than 10,000. There was significant opposition to the plan among parts of the Israeli public, and especially those living in the territories. George W. Bush said that a permanent peace deal would have to reflect \"demographic realities\" in the West Bank regarding Israel's settlements.",
"title": "Dismantling of settlements"
},
{
"paragraph_id": 114,
"text": "Within the former settlements, almost all buildings were demolished by Israel, with the exception of certain government and religious structures, which were completely emptied. Under an international arrangement, productive greenhouses were left to assist the Palestinian economy but about 30% of these were destroyed within hours by Palestinian looters. Following the withdrawal, many of the former synagogues were torched and destroyed by Palestinians.",
"title": "Dismantling of settlements"
},
{
"paragraph_id": 115,
"text": "Some believe that settlements need not necessarily be dismantled and evacuated, even if Israel withdraws from the territory where they stand, as they can remain under Palestinian rule. These ideas have been expressed both by left-wing Israelis, and by Palestinians who advocate the two-state solution, and by extreme Israeli right-wingers and settlers who object to any dismantling and claim links to the land that are stronger than the political boundaries of the state of Israel.",
"title": "Dismantling of settlements"
},
{
"paragraph_id": 116,
"text": "The Israeli government has often threatened to dismantle outposts. Some have actually been dismantled, occasionally with use of force; this led to settler violence.",
"title": "Dismantling of settlements"
},
{
"paragraph_id": 117,
"text": "American refusal to declare the settlements illegal was said to be the determining factor in the 2011 attempt to declare Palestinian statehood at the United Nations, the so-called Palestine 194 initiative.",
"title": "Palestinian statehood bid of 2011"
},
{
"paragraph_id": 118,
"text": "Israel announced additional settlements in response to the Palestinian diplomatic initiative and Germany responded by moving to stop deliveries to Israel of submarines capable of carrying nuclear weapons.",
"title": "Palestinian statehood bid of 2011"
},
{
"paragraph_id": 119,
"text": "Finally in 2012, several European states switched to either abstain or vote for statehood in response to continued settlement construction. Israel approved further settlements in response to the vote, which brought further worldwide condemnation.",
"title": "Palestinian statehood bid of 2011"
},
{
"paragraph_id": 120,
"text": "The settlements have been a source of tension between Israel and the U.S. Jimmy Carter regarded the settlements as illegal and tactically unwise. Ronald Reagan stated that they were legal but an obstacle to negotiations. In 1991, the U.S. delayed a subsidized loan to pressure Israel on the subject of settlement-building in the Jerusalem-Bethlehem corridor. In 2005, U.S. declared support for \"the retention by Israel of major Israeli population centers as an outcome of negotiations,\" reflecting the statement by George W. Bush that a permanent peace treaty would have to reflect \"demographic realities\" in the West Bank. In June 2009, Barack Obama said that the United States \"does not accept the legitimacy of continued Israeli settlements.\"",
"title": "Impact on peace process"
},
{
"paragraph_id": 121,
"text": "Palestinians claim that Israel has undermined the Oslo accords and peace process by continuing to expand the settlements. Settlements in the Sinai Peninsula were evacuated and razed in the wake of the peace agreement with Egypt. The 27 ministers of foreign affairs of the European Union published a report in May 2012 strongly denouncing policies of the State of Israel in the West Bank and finding that Israeli settlements in the West Bank are illegal and \"threaten to make a two-state solution impossible.\" In the framework of the Oslo I Accord of 1993 between the Israeli government and the Palestine Liberation Organization (PLO), a modus vivendi was reached whereby both parties agreed to postpone a final solution on the destination of the settlements to the permanent status negotiations (Article V.3). Israel claims that settlements thereby were not prohibited, since there is no explicit interim provision prohibiting continued settlement construction, the agreement does register an undertaking by both sides, namely that \"Neither side shall initiate or take any step that will change the status of the West Bank and the Gaza Strip pending the outcome of the permanent status negotiations\" (Article XXX1 (7)), which has been interpreted as, not forbidding settlements, but imposing severe restrictions on new settlement building after that date. Melanie Jacques argued in this context that even 'agreements between Israel and the Palestinians which would allow settlements in the OPT, or simply tolerate them pending a settlement of the conflict, violate the Fourth Geneva Convention.'",
"title": "Impact on peace process"
},
{
"paragraph_id": 122,
"text": "Final status proposals have called for retaining long-established communities along the Green Line and transferring the same amount of land in Israel to the Palestinian state. The Clinton administration proposed that Israel keep some settlements in the West Bank, especially those in large blocs near the pre-1967 borders of Israel, with the Palestinians receiving concessions of land in other parts of the country. Both Clinton and Tony Blair pointed out the need for territorial and diplomatic compromise based on the validity of some of the claims of both sides.",
"title": "Impact on peace process"
},
{
"paragraph_id": 123,
"text": "As Minister of Defense, Ehud Barak approved a plan requiring security commitments in exchange for withdrawal from the West Bank. Barak also expressed readiness to cede parts of East Jerusalem and put the holy sites in the city under a \"special regime.\"",
"title": "Impact on peace process"
},
{
"paragraph_id": 124,
"text": "On 14 June 2009, Israeli Prime Minister Benjamin Netanyahu, as an answer to U.S. President Barack Obama's speech in Cairo, delivered a speech setting out his principles for a Palestinian-Israeli peace, among others, he alleged \"... we have no intention of building new settlements or of expropriating additional land for existing settlements.\" In March 2010, the Netanyahu government announced plans for building 1,600 housing units in Ramat Shlomo across the Green Line in East Jerusalem during U.S. Vice President Joe Biden's visit to Israel causing a diplomatic row.",
"title": "Impact on peace process"
},
{
"paragraph_id": 125,
"text": "On 6 September 2010, Jordanian King Abdullah II and Syrian President Bashar al-Assad said that Israel would need to withdraw from all of the lands occupied in 1967 in order to achieve peace with the Palestinians.",
"title": "Impact on peace process"
},
{
"paragraph_id": 126,
"text": "Bradley Burston has said that a negotiated or unilateral withdraw from most of the settlements in the West Bank is gaining traction in Israel.",
"title": "Impact on peace process"
},
{
"paragraph_id": 127,
"text": "In November 2010, the United States offered to \"fight against efforts to delegitimize Israel\" and provide extra arms to Israel in exchange for a continuation of the settlement freeze and a final peace agreement, but failed to come to an agreement with the Israelis on the exact terms.",
"title": "Impact on peace process"
},
{
"paragraph_id": 128,
"text": "In December 2010, the United States criticised efforts by the Palestinian Authority to impose borders for the two states through the United Nations rather than through direct negotiations between the two sides. In February 2011, it vetoed a draft resolution to condemn all Jewish settlements established in the occupied Palestinian territory since 1967 as illegal. The resolution, which was supported by all other Security Council members and co-sponsored by nearly 120 nations, would have demanded that \"Israel, as the occupying power, immediately and completely ceases all settlement activities in the occupied Palestinian territory, including East Jerusalem and that it fully respect its legal obligations in this regard.\" The U.S. representative said that while it agreed that the settlements were illegal, the resolution would harm chances for negotiations. Israel's deputy Foreign Minister, Daniel Ayalon, said that the \"UN serves as a rubber stamp for the Arab countries and, as such, the General Assembly has an automatic majority,\" and that the vote \"proved that the United States is the only country capable of advancing the peace process and the only righteous one speaking the truth: that direct talks between Israel and the Palestinians are required.\" Palestinian negotiators, however, have refused to resume direct talks until Israel ceases all settlement activity.",
"title": "Impact on peace process"
},
{
"paragraph_id": 129,
"text": "In November 2009, Israeli Prime Minister Netanyahu issued a 10-month settlement freeze in the West Bank in an attempt to restart negotiations with the Palestinians. The freeze did not apply to building in Jerusalem in areas across the green line, housing already under construction and existing construction described as \"essential for normal life in the settlements\" such as synagogues, schools, kindergartens and public buildings. The Palestinians refused to negotiate without a complete halt to construction. In the face of pressure from the United States and most world powers supporting the demand by the Palestinian Authority that Israel desist from settlement project in 2010, Israel's ambassador to the UN Meron Reuben said Israel would only stop settlement construction after a peace agreement is concluded, and expressed concern were Arab countries to press for UN recognition of a Palestinian state before such an accord. He cited Israel's dismantlement of settlements in both the Sinai which took place after a peace agreement, and its unilateral dismantlement of settlements in the Gaza Strip. He presumed that settlements would stop being built were Palestinians to establish a state in a given area.",
"title": "Impact on peace process"
},
{
"paragraph_id": 130,
"text": "The Clinton Parameters, a 2000 peace proposal by then U.S. President Bill Clinton, included a plan on which the Palestinian State was to include 94–96% of the West Bank, and around 80% of the settlers were to be under Israeli sovereignty, and in exchange for that, Israel will concede some territory (so called 'Territory Exchange' or 'Land Swap') within the Green Line (1967 borders). The swap would consist of 1–3% of Israeli territory, such that the final borders of the West Bank part of the Palestinian state would include 97% of the land of the original borders.",
"title": "Impact on peace process"
},
{
"paragraph_id": 131,
"text": "In 2010, Palestinian Authority President Mahmoud Abbas said that the Palestinians and Israel have agreed on the principle of a land swap. The issue of the ratio of land Israel would give to the Palestinians in exchange for keeping settlement blocs is an issue of dispute, with the Palestinians demanding that the ratio be 1:1, and Israel insisting that other factors be considered as well.",
"title": "Impact on peace process"
},
{
"paragraph_id": 132,
"text": "Under any peace deal with the Palestinians, Israel intends to keep the major settlement blocs close to its borders, which contain over 80% of the settlers. Prime Ministers Yitzhak Rabin, Ariel Sharon, and Benjamin Netanyahu have all stated Israel's intent to keep such blocs under any peace agreement. U.S. President George W. Bush acknowledged that such areas should be annexed to Israel in a 2004 letter to Prime Minister Sharon.",
"title": "Impact on peace process"
},
{
"paragraph_id": 133,
"text": "The European Union position is that any annexation of settlements should be done as part of mutually agreed land swaps, which would see the Palestinians controlling territory equivalent to the territory captured in 1967. The EU says that it will not recognise any changes to the 1967 borders without an agreement between the parties.",
"title": "Impact on peace process"
},
{
"paragraph_id": 134,
"text": "Israeli Foreign Minister Avigdor Lieberman has proposed a plan which would see settlement blocs annexed to Israel in exchange for heavily Arab areas inside Israel as part of a population exchange.",
"title": "Impact on peace process"
},
{
"paragraph_id": 135,
"text": "According to Mitchell G. Bard: \"Ultimately, Israel may decide to unilaterally disengage from the West Bank and determine which settlements it will incorporate within the borders it delineates. Israel would prefer, however, to negotiate a peace treaty with the Palestinians that would specify which Jewish communities will remain intact within the mutually agreed border of Israel, and which will need to be evacuated. Israel will undoubtedly insist that some or all of the \"consensus\" blocs become part of Israel\".",
"title": "Impact on peace process"
},
{
"paragraph_id": 136,
"text": "A number of proposals for the granting of Palestinian citizenship or residential permits to Jewish settlers in return for the removal of Israeli military installations from the West Bank have been fielded by such individuals as Arafat, Ibrahim Sarsur and Ahmed Qurei. In contrast, Mahmoud Abbas said in July 2013 that \"In a final resolution, we would not see the presence of a single Israeli—civilian or soldier—on our lands.\"",
"title": "Impact on peace process"
},
{
"paragraph_id": 137,
"text": "Israeli Minister Moshe Ya'alon said in April 2010 that \"just as Arabs live in Israel, so, too, should Jews be able to live in Palestine.\" ... \"If we are talking about coexistence and peace, why the [Palestinian] insistence that the territory they receive be ethnically cleansed of Jews?\".",
"title": "Impact on peace process"
},
{
"paragraph_id": 138,
"text": "The idea has been expressed by both advocates of the two-state solution and supporters of the settlers and conservative or fundamentalist currents in Israeli Judaism that, while objecting to any withdrawal, claim stronger links to the land than to the State of Israel.",
"title": "Impact on peace process"
},
{
"paragraph_id": 139,
"text": "On 19 June 2011, Haaretz reported that the Israeli cabinet voted to revoke Defense Minister Ehud Barak's authority to veto new settlement construction in the West Bank, by transferring this authority from the Agriculture Ministry, headed by Barak ally Orit Noked, to the Prime Minister's office.",
"title": "Settlement expansion"
},
{
"paragraph_id": 140,
"text": "In 2009, newly elected Prime Minister Benjamin Netanyahu said: \"I have no intention of building new settlements in the West Bank... But like all the governments there have been until now, I will have to meet the needs of natural growth in the population. I will not be able to choke the settlements.\" On 15 October 2009, he said the settlement row with the United States had been resolved.",
"title": "Settlement expansion"
},
{
"paragraph_id": 141,
"text": "In April 2012, four illegal outposts were retroactively legalized by the Israeli government. In June 2012, the Netanyahu government announced a plan to build 851 homes in five settlements: 300 units in Beit El and 551 units in other settlements.",
"title": "Settlement expansion"
},
{
"paragraph_id": 142,
"text": "Amid peace negotiations that showed little signs of progress, Israel issued on 3 November 2013, tenders for 1,700 new homes for Jewish settlers. The plots were offered in nine settlements in areas Israel says it intends to keep in any peace deal with the Palestinians. On 12 November, Peace Now revealed that the Construction and Housing Ministry had issued tenders for 24,000 more settler homes in the West Bank, including 4,000 in East Jerusalem. 2,500 units were planned in Ma'aleh Adumim, some 9,000 in the Gush Etzion Region, and circa 12,000 in the Binyamin Region, including 1,200 homes in the E1 area in addition to 3,000 homes in previously frozen E1 projects. Circa 15,000 homes of the 24,000 plan would be east of the West Bank Barrier and create the first new settlement blocs for two decades, and the first blocs ever outside the Barrier, far inside the West Bank.",
"title": "Settlement expansion"
},
{
"paragraph_id": 143,
"text": "As stated before, the Israeli government (as of 2015) has a program of residential subsidies in which Israeli settlers receive about double that given to Israelis in Tel Aviv and Jerusalem. As well, settlers in isolated areas receive three times the Israeli national average. From the beginning of 2009 to the end of 2013, the Israeli settlement population as a whole increased by a rate of over 4% per year. A New York Times article in 2015 stated that said building had been \"at the heart of mounting European criticism of Israel.\"",
"title": "Settlement expansion"
},
{
"paragraph_id": 144,
"text": "United Nations Security Council Resolution 2334 \"Requests the Secretary-General to report to the Council every three months on the implementation of the provisions of the present resolution;\" In the first of these reports, delivered verbally at a security council meeting on 24 March 2017, United Nations Special Coordinator for the Middle East Peace Process, Nickolay Mladenov, noted that Resolution 2334 called on Israel to take steps to cease all settlement activity in the Occupied Palestinian Territory, that \"no such steps have been taken during the reporting period\" and that instead, there had been a marked increase in statements, announcements and decisions related to construction and expansion.",
"title": "Settlement expansion"
},
{
"paragraph_id": 145,
"text": "The 2017 Settlement Regularization in \"Judea and Samaria\" Law permits backdated legalization of outposts constructed on private Palestinian land. Following a petition challenging its legality, on June 9, 2020, Israel's Supreme Court struck down the law that had retroactively legalized about 4,000 settler homes built on privately owned Palestinian land. The Israeli Attorney General has stated that existing laws already allow legalization of Israeli constructions on private Palestinian land in the West Bank. The Israeli Attorney General, Avichai Mandelblit, has updated the High Court on his official approval of the use of a legal tactic permitting the de facto legalization of roughly 2,000 illegally built Israeli homes throughout the West Bank. The legal mechanism is known as \"market regulation\" and relies on the notion that wildcat Israeli homes built on private Palestinian land were done so in good faith.",
"title": "Settlement expansion"
},
{
"paragraph_id": 146,
"text": "In a report of 22 July 2019, PeaceNow notes that after a gap of 6 years when there were no new outposts, establishment of new outposts recommenced in 2012, with 32 of the current 126 outposts set up to date. 2 outposts were subject to eviction, 15 were legalized and at least 35 are in process of legalization.",
"title": "Settlement expansion"
},
{
"paragraph_id": 147,
"text": "The Israeli government announced in 2019 that it has made monetary grants available for the construction of hotels in Area C of the West Bank.",
"title": "Settlement expansion"
},
{
"paragraph_id": 148,
"text": "According to Peace Now, approvals for building in Israeli settlements in East Jerusalem expanded by 60% between 2017, when Donald Trump became US president, and 2019.",
"title": "Settlement expansion"
},
{
"paragraph_id": 149,
"text": "On 9 July 2021, Michael Lynk, U.N. special rapporteur on human rights in the occupied Palestinian territory, addressing a session of the UN Human Rights Council in Geneva, said \"I conclude that the Israeli settlements do amount to a war crime,\" and \"I submit to you that this finding compels the international community...to make it clear to Israel that its illegal occupation, and its defiance of international law and international opinion, can and will no longer be cost-free.\" Israel, which does not recognize Lynk's mandate, boycotted the session.",
"title": "Settlement expansion"
},
{
"paragraph_id": 150,
"text": "A new Israeli government, formed on 13 June 2021, declared a \"status quo\" in the settlements policy. According to Peace Now, as of 28 October this has not been the case. On October 24, 2021, tenders were published for 1,355 housing units plus another 83 in Givat HaMatos and on 27 October 2021, approval was given for 3,000 housing units including in settlements deep inside the West Bank. These developments were condemned by the U.S. as well as by the United Kingdom, Russia and 12 European countries. while UN experts, Michael Lynk, Special Rapporteur on the situation of human rights in the Palestinian Territory occupied since 1967 and Mr. Balakrishnan Rajagopal (United States of America), UN Special Rapporteur on adequate housing said that settlement expansion should be treated as a \"presumptive war crime\".",
"title": "Settlement expansion"
},
{
"paragraph_id": 151,
"text": "In February 2023, the new Israeli government under Benjamin Netanyahu approved the legalization of nine illegal settler outposts in the West Bank. Finance Minister Bezalel Smotrich took charge of most of the Civil Administration, obtaining broad authority over civilian issues in the West Bank. In March 2023, Netanyahu's government repealed a 2005 law whereby four Israeli settlements, Homesh, Sa-Nur, Ganim and Kadim, were dismantled as part of the Israeli disengagement from Gaza. In June 2023, Israel shortened the procedure of approving settlement construction and gave Finance Minister Smotrich the authority to approve one of the stages, changing the system operating for the last 27 years. In its first six months, construction of 13,000 housing units in settlements, almost triple the amount advanced in the whole of 2022.",
"title": "Settlement expansion"
}
]
| Israeli settlements or colonies are civilian communities where Israeli citizens live, almost exclusively of Jewish identity or ethnicity, built on lands occupied by Israel since the Six-Day War in 1967. The international community considers Israeli settlements to be illegal under international law, though Israel disputes this. Israeli settlements currently exist in the West Bank, claimed by the State of Palestine as its sovereign territory, and in the Golan Heights, which is internationally considered Syrian territory. East Jerusalem and the Golan Heights have been effectively annexed by Israel, though the international community has rejected any change of status and considers each occupied territory. Although the West Bank settlements are on land administered under Israeli military rule rather than civil law, Israeli civil law is "pipelined" into the settlements, such that Israeli citizens living there are treated similarly to those living in Israel. In the West Bank, Israel continues to expand its remaining settlements as well as settling new areas, despite pressure from the international community to desist. The international community regards both territories as held under Israeli occupation and the localities established there to be illegal settlements. The International Court of Justice found the settlements to be illegal in its 2004 advisory opinion on the West Bank barrier. As of January 2023, there are 144 Israeli settlements in the West Bank, including 12 in East Jerusalem. There are over 100 Israeli illegal outposts in the West Bank. In total, over 450,000 Israeli settlers live in the West Bank excluding East Jerusalem, with an additional 220,000 Jewish settlers residing in East Jerusalem. Additionally, over 25,000 Israeli settlers live in the Golan Heights. Israeli settlements had previously been built within the Egyptian territory of the Sinai Peninsula, and within the Palestinian territory of the Gaza Strip; however, Israel evacuated and dismantled the 18 Sinai settlements following the 1979 Egypt–Israel peace agreement and all of the 21 settlements in the Gaza Strip, along with four in the West Bank, in 2005 as part of its unilateral disengagement from Gaza. The transfer by an occupying power of its civilian population into the territory it occupies is a war crime, although Israel disputes that this applies to the West Bank. On 20 December 2019, the International Criminal Court announced an International Criminal Court investigation in Palestine into alleged war crimes. The presence and ongoing expansion of existing settlements by Israel and the construction of settlement outposts is frequently criticized as an obstacle to the Israeli–Palestinian peace process by the Palestinians, and third parties such as the OIC, the United Nations, Russia, the United Kingdom, France, and the European Union have echoed those criticisms. The international community considers the settlements to be illegal under international law, and the United Nations has repeatedly upheld the view that Israel's construction of settlements constitutes a violation of the Fourth Geneva Convention. The United States for decades considered the settlements to be "illegitimate", until the Trump administration in November 2019 shifted its position, declaring "the establishment of Israeli civilian settlements in the West Bank is not per se inconsistent with international law." | 2001-10-07T11:06:55Z | 2024-01-01T00:24:41Z | [
"Template:Cite SSRN",
"Template:Multiple image",
"Template:See also",
"Template:Webarchive",
"Template:Cite journal",
"Template:Short description",
"Template:Lang-he",
"Template:Sfn",
"Template:Main",
"Template:Bsn",
"Template:Cite book",
"Template:Dead link",
"Template:Reflist",
"Template:Cite web",
"Template:Isbn",
"Template:Doi",
"Template:Israel populations",
"Template:Weasel inline",
"Template:Further",
"Template:Portal",
"Template:Pp-30-500",
"Template:Use dmy dates",
"Template:Commons category",
"Template:Zionism and the Land of Israel",
"Template:Lead too long",
"Template:Cite news",
"Template:Cite magazine",
"Template:Other uses",
"Template:Efn",
"Template:Notelist",
"Template:ISBN",
"Template:Cbignore",
"Template:Zionism",
"Template:Main articles",
"Template:As of",
"Template:Citation needed",
"Template:Note"
]
| https://en.wikipedia.org/wiki/Israeli_settlement |
15,125 | Irrealism (the arts) | Irrealism is a term that has been used by various writers in the fields of philosophy, literature, and art to denote specific modes of unreality and/or the problems in concretely defining reality. While in philosophy the term specifically refers to a position put forward by the American philosopher Nelson Goodman, in literature and art it refers to a variety of writers and movements. If the term has nonetheless retained a certain consistency in its use across these fields and would-be movements, it perhaps reflects the word's position in general English usage: though the standard dictionary definition of irreal gives it the same meaning as unreal, irreal is very rarely used in comparison with unreal. Thus, it has generally been used to describe something which, while unreal, is so in a very specific or unusual fashion, usually one emphasizing not just the "not real," but some form of estrangement from our generally accepted sense of reality.
In literature, the term irrealism was first used extensively in the United States in the 1970s to describe the post-realist "new fiction" of writers such as Donald Barthelme or John Barth. More generally, it described the notion that all forms of writing could only "offer particular versions of reality rather than actual descriptions of it," and that a story need not offer a clear resolution at its end. John Gardner, in The Art of Fiction, cites in this context the work of Barthelme and its "seemingly limitless ability to manipulate [literary] techniques as modes of apprehension [which] apprehend nothing." Though Barth, in a 1974 interview, stated, "irrealism—not antirealism or unrealism, but irrealism—is all that I would confidently predict is likely to characterize the prose fiction of the 1970s," this did not prove to be the case. Instead writing in the United States quickly returned to its realist orthodoxy and the term irrealism fell into disuse.
In recent years, however, the term has been revived in an attempt to describe and categorize, in literary and philosophical terms, how it is that the work of an irrealist writer differs from the work of writers in other, non-realistic genres (e.g., the fantasy of J.R.R. Tolkien, the magical realism of Gabriel García Márquez) and what the significance of this difference is. This can be seen in Dean Swinford's essay Defining irrealism: scientific development and allegorical possibility. Approaching the issue from a structuralist and narratological point of view, he has defined irrealism as a "peculiar mode of postmodern allegory" that has resulted from modernity's fragmentation and dismantling of the well-ordered and coherent medieval system of symbol and allegory. Thus a lion, when presented in a given context in medieval literature, could only be interpreted in a single, approved way. Contemporary literary theory, however, denies the attribution of such fixed meanings. According to Swinford, this change can be attributed in part to the fact that "science and technical culture have changed perceptions of the natural world, have significantly changed the natural world itself, thereby altering the vocabulary of symbols applicable to epistemological and allegorical attempts to understand it." Thus irreal works such as Italo Calvino's Cosmicomics and Jorge Luis Borges' Ficciones can be seen as an attempt to find a new allegorical language to explain our changed perceptions of the world that have been brought about by our scientific and technical culture, especially concepts such as quantum physics or the theory of relativity. "The Irrealist work, then, operates within a given system," writes Swinford, "and attests to its plausibility, despite the fact that this system, and the world it represents, is often a mutation, an aberration."
The online journal The Cafe Irreal, on the other hand, has defined irrealism as a type of existentialist literature in which the means are continually and absurdly rebelling against the ends that we have determined for them. An example of this would be Franz Kafka's story The Metamorphosis, in which the salesman Gregor Samsa's plans for supporting his family and rising up in rank by hard work and determination are suddenly thrown topsy-turvy by his sudden and inexplicable transformation into a man-sized insect. Such fiction is said to emphasize the fact that human consciousness, being finite in nature, can never make complete sense of, or successfully order, a universe that is infinite in its aspects and possibilities. Which is to say: as much as we might try to order our world with a certain set of norms and goals (which we consider our real world), the paradox of a finite consciousness in an infinite universe creates a zone of irreality ("that which is beyond the real") that offsets, opposes, or threatens the real world of the human subject. Irrealist writing often highlights this irreality, and our strange fascination with it, by combining the unease we feel because the real world doesn't conform to our desires with the narrative quality of the dream state (where reality is constantly and inexplicably being undermined); it is thus said to communicate directly, "by feeling rather than articulation, the uncertainties inherent in human existence or, to put it another way... the irreconcilability between human aspiration and human reality." If the irreal story can be considered an allegory, then, it would be an allegory that is "so many pointers to an unknown meaning," in which the meaning is felt more than it is articulated or systematically analyzed.
Various writers have addressed the question of Irrealism in Art. Many salient observations on Irrealism in Art are found in Nelson Goodman's Languages of Art. Goodman himself produced some multimedia shows, one of which was inspired by hockey and is entitled Hockey Seen: A Nightmare in Three Periods and Sudden Death.
Garret Rowlan, writing in The Cafe Irreal, writes that the malaise present in the work of the Italian artist Giorgio de Chirico, "which recalls Kafka, has to do with the sense of another world lurking, hovering like the long shadows that dominate de Chirico's paintings, which frequently depict a landscape at twilight's uncertain hour. Malaise and mystery are all by-products of the interaction of the real and the unreal, the rub and contact of two worlds caught on irrealism's shimmering surface."
The writer Dean Swinford, whose concept of irrealism was described at length in the section "Irrealism in Literature", wrote that the artist Remedios Varo, in her painting The Juggler, "creates a personal allegorical system which relies on the predetermined symbols of Christian and classical iconography. But these are quickly refigured into a personal system informed by the scientific and organized like a machine...in the Irreal work, allegory operates according to an altered, but constant and orderly iconographic system."
Artist Tristan Tondino claims "There is no specific style to Irrealist Art. It is the result of awareness that every human act is the result of the limitations of the world of the actor."
In Australia, the art journal the art life has recently detected the presence of a "New Irrealism" among the painters of that country, which is described as being an "approach to painting that is decidedly low key, deploying its effects without histrionic showmanship, while creating an eerie other world of ghostly images and abstract washes." What exactly constituted the "old" irrealism, they do not say.
Irrealist Art Edition is a publishing company created in the 1990s by the contemporary plastic artist Frédéric Iriarte. Together with the Estonian poet, writer and art critic Ilmar Laaban, Iriarte developed the concept of Irrealism through several essays, exhibitions, projects, manifestos and a book, "Irréalisation". Irrealist Art Edition ISBN 91-630-2304-0 | [
{
"paragraph_id": 0,
"text": "Irrealism is a term that has been used by various writers in the fields of philosophy, literature, and art to denote specific modes of unreality and/or the problems in concretely defining reality. While in philosophy the term specifically refers to a position put forward by the American philosopher Nelson Goodman, in literature and art it refers to a variety of writers and movements. If the term has nonetheless retained a certain consistency in its use across these fields and would-be movements, it perhaps reflects the word's position in general English usage: though the standard dictionary definition of irreal gives it the same meaning as unreal, irreal is very rarely used in comparison with unreal. Thus, it has generally been used to describe something which, while unreal, is so in a very specific or unusual fashion, usually one emphasizing not just the \"not real,\" but some form of estrangement from our generally accepted sense of reality.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In literature, the term irrealism was first used extensively in the United States in the 1970s to describe the post-realist \"new fiction\" of writers such as Donald Barthelme or John Barth. More generally, it described the notion that all forms of writing could only \"offer particular versions of reality rather than actual descriptions of it,\" and that a story need not offer a clear resolution at its end. John Gardner, in The Art of Fiction, cites in this context the work of Barthelme and its \"seemingly limitless ability to manipulate [literary] techniques as modes of apprehension [which] apprehend nothing.\" Though Barth, in a 1974 interview, stated, \"irrealism—not antirealism or unrealism, but irrealism—is all that I would confidently predict is likely to characterize the prose fiction of the 1970s,\" this did not prove to be the case. Instead writing in the United States quickly returned to its realist orthodoxy and the term irrealism fell into disuse.",
"title": "Irrealism in literature"
},
{
"paragraph_id": 2,
"text": "In recent years, however, the term has been revived in an attempt to describe and categorize, in literary and philosophical terms, how it is that the work of an irrealist writer differs from the work of writers in other, non-realistic genres (e.g., the fantasy of J.R.R. Tolkien, the magical realism of Gabriel García Márquez) and what the significance of this difference is. This can be seen in Dean Swinford's essay Defining irrealism: scientific development and allegorical possibility. Approaching the issue from a structuralist and narratological point of view, he has defined irrealism as a \"peculiar mode of postmodern allegory\" that has resulted from modernity's fragmentation and dismantling of the well-ordered and coherent medieval system of symbol and allegory. Thus a lion, when presented in a given context in medieval literature, could only be interpreted in a single, approved way. Contemporary literary theory, however, denies the attribution of such fixed meanings. According to Swinford, this change can be attributed in part to the fact that \"science and technical culture have changed perceptions of the natural world, have significantly changed the natural world itself, thereby altering the vocabulary of symbols applicable to epistemological and allegorical attempts to understand it.\" Thus irreal works such as Italo Calvino's Cosmicomics and Jorge Luis Borges' Ficciones can be seen as an attempt to find a new allegorical language to explain our changed perceptions of the world that have been brought about by our scientific and technical culture, especially concepts such as quantum physics or the theory of relativity. \"The Irrealist work, then, operates within a given system,\" writes Swinford, \"and attests to its plausibility, despite the fact that this system, and the world it represents, is often a mutation, an aberration.\"",
"title": "Irrealism in literature"
},
{
"paragraph_id": 3,
"text": "The online journal The Cafe Irreal , on the other hand, has defined irrealism as being a type of existentialist literature in which the means are continually and absurdly rebelling against the ends that we have determined for them. An example of this would be Franz Kafka's story The Metamorphosis, in which the salesman Gregor Samsa's plans for supporting his family and rising up in rank by hard work and determination are suddenly thrown topsy-turvy by his sudden and inexplicable transformation into a man-sized insect. Such fiction is said to emphasize the fact that human consciousness, being finite in nature, can never make complete sense of, or successfully order, a universe that is infinite in its aspects and possibilities. Which is to say: as much as we might try to order our world with a certain set of norms and goals (which we consider our real world), the paradox of a finite consciousness in an infinite universe creates a zone of irreality (\"that which is beyond the real\") that offsets, opposes, or threatens the real world of the human subject. Irrealist writing often highlights this irreality, and our strange fascination with it, by combining the unease we feel because the real world doesn't conform to our desires with the narrative quality of the dream state (where reality is constantly and inexplicably being undermined); it is thus said to communicate directly, \"by feeling rather than articulation, the uncertainties inherent in human existence or, to put it another way... the irreconcilability between human aspiration and human reality.\" If the irreal story can be considered an allegory, then, it would be an allegory that is \"so many pointers to an unknown meaning,\" in which the meaning is felt more than it is articulated or systematically analyzed.",
"title": "Irrealism in literature"
},
{
"paragraph_id": 4,
"text": "Various writers have addressed the question of Irrealism in Art. Many salient observations on Irrealism in Art are found in Nelson Goodman's Languages of Art. Goodman himself produced some multimedia shows, one of which inspired by hockey and is entitled Hockey Seen: A Nightmare in Three Periods and Sudden Death.",
"title": "Irrealism in art"
},
{
"paragraph_id": 5,
"text": "Garret Rowlan, writing in The Cafe Irreal, writes that the malaise present in the work of the Italian artist Giorgio de Chirico, \"which recalls Kafka, has to do with the sense of another world lurking, hovering like the long shadows that dominate de Chirico's paintings, which frequently depict a landscape at twilight's uncertain hour. Malaise and mystery are all by-products of the interaction of the real and the unreal, the rub and contact of two worlds caught on irrealism's shimmering surface.\"",
"title": "Irrealism in art"
},
{
"paragraph_id": 6,
"text": "The writer Dean Swinford, whose concept of irrealism was described at length in the section \"Irrealism in Literature\", wrote that the artist Remedios Varos, in her painting The Juggler, \"creates a personal allegorical system which relies on the predetermined symbols of Christian and classical iconography. But these are quickly refigured into a personal system informed by the scientific and organized like a machine...in the Irreal work, allegory operates according to an altered, but constant and orderly iconographic system.\"",
"title": "Irrealism in art"
},
{
"paragraph_id": 7,
"text": "Artist Tristan Tondino claims \"There is no specific style to Irrealist Art. It is the result of awareness that every human act is the result of the limitations of the world of the actor.\"",
"title": "Irrealism in art"
},
{
"paragraph_id": 8,
"text": "In Australia, the art journal the art life has recently detected the presence of a \"New Irrealism\" among the painters of that country, which is described as being an \"approach to painting that is decidedly low key, deploying its effects without histrionic showmanship, while creating an eerie other world of ghostly images and abstract washes.\" What exactly constituted the \"old\" irrealism, they do not say.",
"title": "Irrealism in art"
},
{
"paragraph_id": 9,
"text": "Irrealist Art Edition is a publishing company created in the 90s by contemporary plastic artist Frédéric Iriarte. Together with the Estonian poet, writer and art critic Ilmar Laaban, they developed their concept of Irrealism through several essays, exhibitions, projects, manifest and a book, \"Irréalisation\". Irrealist Art Edition ISBN 91-630-2304-0",
"title": "Irrealist Art, Film and Music Edition"
}
]
| Irrealism is a term that has been used by various writers in the fields of philosophy, literature, and art to denote specific modes of unreality and/or the problems in concretely defining reality. While in philosophy the term specifically refers to a position put forward by the American philosopher Nelson Goodman, in literature and art it refers to a variety of writers and movements. If the term has nonetheless retained a certain consistency in its use across these fields and would-be movements, it perhaps reflects the word's position in general English usage: though the standard dictionary definition of irreal gives it the same meaning as unreal, irreal is very rarely used in comparison with unreal. Thus, it has generally been used to describe something which, while unreal, is so in a very specific or unusual fashion, usually one emphasizing not just the "not real," but some form of estrangement from our generally accepted sense of reality. | 2001-10-08T13:45:58Z | 2023-12-10T16:50:42Z | [
"Template:ISBN",
"Template:Cite book"
]
| https://en.wikipedia.org/wiki/Irrealism_(the_arts) |
15,129 | You have two cows | "You have two cows" is a political analogy and form of early 20th century American political satire to describe various economic systems of government. The setup of a typical joke of this kind is the assumption that the listener lives within a given system and has two cows, a very relatable occupation across countries and national boundaries. The punch line is what happens to the listener and the cows in the system; it offers a brief and humorous take on the subject or locale.
A newer variant of the joke cycle compares different peoples and countries.
An article in The Modern Language Journal lists the following classical ones:
Bill Sherk mentions that such lists have circulated throughout the United States since around 1936 under the title "Parable of the Isms". A column in The Chicago Daily Tribune in 1938 attributes a version involving socialism, communism, fascism and New Dealism to an address by Silas Strawn to the Economic Club of Chicago on 29 November 1935.
Richard M Steers and Luciara Nardon in their book about global economy use the "two cows" metaphor to illustrate the concept of cultural differences. They write that jokes of the kind are considered funny because they are "realistic but exaggerated caricatures" of various cultures, and the pervasiveness of such jokes stems from the significant cultural differences. Steers and Nardon also state that others believe such jokes present cultural stereotypes and must be viewed with caution.
Jokes of this genre formed the base of a monologue by American comedian Pat Paulsen on The Smothers Brothers Comedy Hour in the late 1960s. Satirising the satire, he appended this comment to capitalism: "...Then put both of them in your wife's name and declare bankruptcy." This material was later used as an element of his satirical US presidential campaign in 1968, and was included on his 1968 comedy album Pat Paulsen for President.
The economics of the Enron scandal have been a target of the "two cows" joke, often describing the accounting fraud that took place in Enron's finances. Much of the beginning of the joke when used to describe Enron resembles the following:
Enronism: You have two cows. You sell three of them to your publicly listed company, using letters of credit opened by your brother-in-law at the bank, then execute a debt/equity swap with an associated general offer so that you get all four cows back, with a tax exemption for five cows. The milk rights of the six cows are transferred via an intermediary to a Cayman Island company secretly owned by your CFO who sells the rights to all seven cows back to your listed company. The annual report says the company owns eight cows, with an option on six more.
The ending of the joke varies in most iterations. The magazine Wired in 2008 ended the joke with Enron selling one cow to buy a new president of the United States, no balance sheet being provided with the annual report, and the public ultimately buying Enron's bull. In 2002, Power Engineering ended the joke by announcing Enron would start trading cows online using the platform COW (cows on web).
{
"paragraph_id": 0,
"text": "\"You have two cows\" is a political analogy and form of early 20th century American political satire to describe various economic systems of government. The setup of a typical joke of this kind is the assumption that the listener lives within a given system and has two cows, a very relatable occupation across countries and national boundaries. The punch line is what happens to the listener and the cows in the system; it offers a brief and humorous take on the subject or locale.",
"title": ""
},
{
"paragraph_id": 1,
"text": "A newer variant of the joke cycle compares different peoples and countries.",
"title": ""
},
{
"paragraph_id": 2,
"text": "An article in The Modern Language Journal lists the following classical ones:",
"title": "History"
},
{
"paragraph_id": 3,
"text": "Bill Sherk mentions that such lists circulated throughout the United States since around 1936 under the title \"Parable of the Isms\". A column in The Chicago Daily Tribune in 1938 attributes a version involving socialism, communism, fascism and New Dealism to an address by Silas Strawn to the Economic Club of Chicago on 29 November 1935.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Richard M Steers and Luciara Nardon in their book about global economy use the \"two cows\" metaphor to illustrate the concept of cultural differences. They write that jokes of the kind are considered funny because they are \"realistic but exaggerated caricatures\" of various cultures, and the pervasiveness of such jokes stems from the significant cultural differences. Steers and Nardon also state that others believe such jokes present cultural stereotypes and must be viewed with caution.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Jokes of this genre formed the base of a monologue by American comedian Pat Paulsen on The Smothers Brothers Comedy Hour in the late 1960s. Satirising the satire, he appended this comment to capitalism: \"...Then put both of them in your wife's name and declare bankruptcy.\" This material was later used as an element of his satirical US presidential campaign in 1968, and was included on his 1968 comedy album Pat Paulsen for President.",
"title": "Notable variants"
},
{
"paragraph_id": 6,
"text": "The economics of the Enron scandal have been a target of the \"two cows\" joke, often describing the accounting fraud that took place in Enron's finances. Much of the beginning of the joke when used to describe Enron resembles the following:",
"title": "Notable variants"
},
{
"paragraph_id": 7,
"text": "Enronism: You have two cows. You sell three of them to your publicly listed company, using letters of credit opened by your brother-in-law at the bank, then execute a debt/equity swap with an associated general offer so that you get all four cows back, with a tax exemption for five cows. The milk rights of the six cows are transferred via an intermediary to a Cayman Island company secretly owned by your CFO who sells the rights to all seven cows back to your listed company. The annual report says the company owns eight cows, with an option on six more.",
"title": "Notable variants"
},
{
"paragraph_id": 8,
"text": "The ending of the joke varies in most interactions. The magazine Wired in 2008 ended the joke with Enron selling one cow to buy a new president of the United States, that no balance sheet was provided with the annual report, and ultimately the public buying Enron's bull. In 2002, Power Engineering ended the joke by announcing Enron would start trading cows online using the platform COW (cows on web).",
"title": "Notable variants"
}
]
| "You have two cows" is a political analogy and form of early 20th century American political satire to describe various economic systems of government. The setup of a typical joke of this kind is the assumption that the listener lives within a given system and has two cows, a very relatable occupation across countries and national boundaries. The punch line is what happens to the listener and the cows in the system; it offers a brief and humorous take on the subject or locale. A newer variant of the joke cycle compares different peoples and countries. | 2001-10-09T07:56:11Z | 2023-11-24T23:43:41Z | [
"Template:Cite journal",
"Template:ISBN",
"Template:Cite news",
"Template:Cite web",
"Template:Cite magazine",
"Template:Short description",
"Template:Block quote",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/You_have_two_cows |
15,134 | Lightbulb joke | A lightbulb joke is a joke cycle that asks how many people of a certain group are needed to change, replace, or screw in a light bulb. Generally, the punch line answer highlights a stereotype of the target group. There are numerous versions of the lightbulb joke satirizing a wide range of cultures, beliefs, and occupations.
Early versions of the joke, popular in the late 1960s and the 1970s, were used to insult the intelligence of people, especially Poles ("Polish jokes"). For instance:
Q. How many Polacks does it take to change a light bulb? A. Three—one to hold the light bulb and two to turn the ladder.
Although lightbulb jokes tend to be derogatory in tone (e.g., "How many drunkards..." / "Four: one to hold the light bulb and three to drink until the room spins"), the people targeted by them may take pride in the stereotypes expressed and are often themselves the jokes' originators, as in "How many Germans does it take to change a lightbulb? One, we're very efficient but not funny." where the joke itself becomes a statement of ethnic pride. Lightbulb jokes applied to subgroups can be used to ease tensions between them.
Some versions of the joke are puns on the words "change" or "screw", or "light":
Q. How many psychiatrists does it take to change a light bulb? A. None—the light bulb will change when it's ready.
Q. How many flies does it take to screw in a lightbulb? A. Two, but don't ask me how they got in there.
Q. How many hands does it take to change a lightbulb? A. Many.
Lightbulb jokes are often responses to contemporary events. For example, the lightbulb may not need to be changed at all due to ongoing power outages.
The Village Voice held a $200 lightbulb joke contest around the time of the Iran hostage crisis, with the winning joke being:
Q. How many Iranians does it take to change a light bulb? A. You send us the prize money and we'll tell you the answer.
Lightbulb jokes can also be about sports, teasing a team about its past, future, etc.
Q. How many Liverpool fans does it take to screw in a lightbulb? A. They don't, they just talk about how good the old one was.
Lightbulb jokes can also be related to religious groups and denominations.
Q. Why is it easier for a Pentecostal to change a light bulb? A. Because their hands are already up. | [
{
"paragraph_id": 0,
"text": "A lightbulb joke is a joke cycle that asks how many people of a certain group are needed to change, replace, or screw in a light bulb. Generally, the punch line answer highlights a stereotype of the target group. There are numerous versions of the lightbulb joke satirizing a wide range of cultures, beliefs, and occupations.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Early versions of the joke, popular in the late 1960s and the 1970s, were used to insult the intelligence of people, especially Poles (\"Polish jokes\"). For instance:",
"title": ""
},
{
"paragraph_id": 2,
"text": "Q. How many Polacks does it take to change a light bulb? A. Three—one to hold the light bulb and two to turn the ladder.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Although lightbulb jokes tend to be derogatory in tone (e.g., \"How many drunkards...\" / \"Four: one to hold the light bulb and three to drink until the room spins\"), the people targeted by them may take pride in the stereotypes expressed and are often themselves the jokes' originators, as in \"How many Germans does it take to change a lightbulb? One, we're very efficient but not funny.\" where the joke itself becomes a statement of ethnic pride. Lightbulb jokes applied to subgroups can be used to ease tensions between them.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Some versions of the joke are puns on the words \"change\" or \"screw\", or \"light\":",
"title": "Variations"
},
{
"paragraph_id": 5,
"text": "Q. How many psychiatrists does it take to change a light bulb? A. None—the light bulb will change when it's ready.",
"title": "Variations"
},
{
"paragraph_id": 6,
"text": "Q. How many flies does it take to screw in a lightbulb? A. Two, but don't ask me how they got in there.",
"title": "Variations"
},
{
"paragraph_id": 7,
"text": "Q. How many hands does it take to change a lightbulb? A. Many.",
"title": "Variations"
},
{
"paragraph_id": 8,
"text": "Lightbulb jokes are often responses to contemporary events. For example, the lightbulb may not need to be changed at all due to ongoing power outages.",
"title": "Variations"
},
{
"paragraph_id": 9,
"text": "The Village Voice held a $200 lightbulb joke contest around the time of the Iran hostage crisis, with the winning joke being:",
"title": "Variations"
},
{
"paragraph_id": 10,
"text": "Q. How many Iranians does it take to change a light bulb? A. You send us the prize money and we'll tell you the answer.",
"title": "Variations"
},
{
"paragraph_id": 11,
"text": "Lightbulb jokes can also be about sports, teasing about their team's past, future, etc.",
"title": "Variations"
},
{
"paragraph_id": 12,
"text": "Q. How many Liverpool fans does it take to screw in a lightbulb? A. They don't, they just talk about how good the old one was.",
"title": "Variations"
},
{
"paragraph_id": 13,
"text": "Lightbulb jokes can also be related to religious groups and denominations.",
"title": "Variations"
},
{
"paragraph_id": 14,
"text": "Q. Why is it easier for a Pentecostal to change a light bulb? A. Because their hands are already up.",
"title": "Variations"
}
]
| A lightbulb joke is a joke cycle that asks how many people of a certain group are needed to change, replace, or screw in a light bulb. Generally, the punch line answer highlights a stereotype of the target group. There are numerous versions of the lightbulb joke satirizing a wide range of cultures, beliefs, and occupations. Early versions of the joke, popular in the late 1960s and the 1970s, were used to insult the intelligence of people, especially Poles. For instance: Although lightbulb jokes tend to be derogatory in tone, the people targeted by them may take pride in the stereotypes expressed and are often themselves the jokes' originators, as in "How many Germans does it take to change a lightbulb? One, we're very efficient but not funny." where the joke itself becomes a statement of ethnic pride. Lightbulb jokes applied to subgroups can be used to ease tensions between them. | 2001-10-19T22:51:35Z | 2023-12-19T11:47:53Z | [
"Template:Block indent",
"Template:Reflist",
"Template:Cite news",
"Template:Cite journal",
"Template:Cite web",
"Template:Cite book",
"Template:Authority control",
"Template:Short description"
]
| https://en.wikipedia.org/wiki/Lightbulb_joke |
15,144 | International Electrotechnical Commission | The International Electrotechnical Commission (IEC; French: Commission électrotechnique internationale) is an international standards organization that prepares and publishes international standards for all electrical, electronic and related technologies – collectively known as "electrotechnology". IEC standards cover a vast range of technologies from power generation, transmission and distribution to home appliances and office equipment, semiconductors, fibre optics, batteries, solar energy, nanotechnology and marine energy as well as many others. The IEC also manages four global conformity assessment systems that certify whether equipment, system or components conform to its international standards.
All electrotechnologies are covered by IEC Standards, including energy production and distribution, electronics, magnetics and electromagnetics, electroacoustics, multimedia, telecommunication and medical technology, as well as associated general disciplines such as terminology and symbols, electromagnetic compatibility, measurement and performance, dependability, design and development, safety and the environment.
The first International Electrical Congress took place in 1881 at the International Exposition of Electricity, held in Paris. At that time the International System of Electrical and Magnetic Units was agreed to.
The International Electrotechnical Commission held its inaugural meeting on 26 June 1906, following discussions among the British Institution of Electrical Engineers, the American Institute of Electrical Engineers, and others, which began at the 1900 Paris International Electrical Congress, with British engineer R. E. B. Crompton playing a key role. In 1906, Lord Kelvin was elected as the first President of the International Electrotechnical Commission.
The IEC was instrumental in developing and distributing standards for units of measurement, particularly the gauss, hertz, and weber. It was also first to promote the Giorgi System of standards, later developed into the SI, or Système International d'unités (in English, the International System of Units).
In 1938, it published a multilingual international vocabulary to unify terminology relating to electrical, electronic and related technologies. This effort continues, and the International Electrotechnical Vocabulary is published online as the Electropedia.
The CISPR (Comité International Spécial des Perturbations Radioélectriques) – in English, the International Special Committee on Radio Interference – is one of the groups founded by the IEC.
Currently, 89 countries are IEC members while another 85 participate in the Affiliate Country Programme, which is not a form of membership but is designed to help industrializing countries get involved with the IEC. Originally located in London, the IEC moved to its current headquarters in Geneva, Switzerland in 1948.
It has regional centres in Africa (Nairobi, Kenya), Asia (Singapore), Oceania (Sydney, Australia), Latin America (São Paulo, Brazil) and North America (Worcester, Massachusetts, United States).
The work is done by some 10,000 electrical and electronics experts from industry, government, academia, test labs and others with an interest in the subject.
IEC Standards are often adopted as national standards by its members.
The IEC cooperates closely with the International Organization for Standardization (ISO) and the International Telecommunication Union (ITU). In addition, it works with several major standards development organizations, including the IEEE with which it signed a cooperation agreement in 2002, which was amended in 2008 to include joint development work.
IEC Standards that are not jointly developed with ISO have numbers in the range 60000–79999 and their titles take a form such as IEC 60417: Graphical symbols for use on equipment. Following the Dresden Agreement with CENELEC the numbers of older IEC standards were converted in 1997 by adding 60000, for example IEC 27 became IEC 60027. Standards of the 60000 series are also found preceded by EN to indicate that the IEC standard is also adopted by CENELEC as a European standard; for example IEC 60034 is also available as EN 60034.
Standards developed jointly with ISO, such as ISO/IEC 26300 (Open Document Format for Office Applications (OpenDocument) v1.0), ISO/IEC 27001 (Information technology, Security techniques, Information security management systems, Requirements), and ISO/IEC 17000 series, carry the acronym of both organizations. The use of the ISO/IEC prefix covers publications from ISO/IEC Joint Technical Committee 1 – Information Technology, as well as conformity assessment standards developed by ISO CASCO (Committee on conformity assessment) and IEC CAB (Conformity Assessment Board). Other standards developed in cooperation between IEC and ISO are assigned numbers in the 80000 series, such as IEC 82045–1.
IEC Standards are also being adopted by other certifying bodies such as BSI (United Kingdom), CSA (Canada), UL & ANSI/INCITS (United States), SABS (South Africa), Standards Australia, SPC/GB (China) and DIN (Germany). IEC standards adopted by other certifying bodies may have some noted differences from the original IEC standard.
The IEC is made up of members, called national committees, and each NC represents its nation's electrotechnical interests in the IEC. This includes manufacturers, providers, distributors and vendors, consumers and users, all levels of governmental agencies, professional societies and trade associations as well as standards developers from national standards bodies. National committees are constituted in different ways. Some NCs are public sector only, some are a combination of public and private sector, and some are private sector only. About 90% of those who prepare IEC standards work in industry. IEC Member countries include:
In 2001 and in response to calls from the WTO to open itself to more developing nations, the IEC launched the Affiliate Country Programme to encourage developing nations to become involved in the commission's work or to use its International Standards. Countries signing a pledge to participate in the work and to encourage the use of IEC Standards in national standards and regulations are granted access to a limited number of technical committee documents for the purposes of commenting. In addition, they can select a limited number of IEC Standards for their national standards' library. Countries participating in the Affiliate Country Programme are: | [
{
"paragraph_id": 0,
"text": "The International Electrotechnical Commission (IEC; French: Commission électrotechnique internationale) is an international standards organization that prepares and publishes international standards for all electrical, electronic and related technologies – collectively known as \"electrotechnology\". IEC standards cover a vast range of technologies from power generation, transmission and distribution to home appliances and office equipment, semiconductors, fibre optics, batteries, solar energy, nanotechnology and marine energy as well as many others. The IEC also manages four global conformity assessment systems that certify whether equipment, system or components conform to its international standards.",
"title": ""
},
{
"paragraph_id": 1,
"text": "All electrotechnologies are covered by IEC Standards, including energy production and distribution, electronics, magnetics and electromagnetics, electroacoustics, multimedia, telecommunication and medical technology, as well as associated general disciplines such as terminology and symbols, electromagnetic compatibility, measurement and performance, dependability, design and development, safety and the environment.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The first International Electrical Congress took place in 1881 at the International Exposition of Electricity, held in Paris. At that time the International System of Electrical and Magnetic Units was agreed to.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "The International Electrotechnical Commission held its inaugural meeting on 26 June 1906, following discussions among the British Institution of Electrical Engineers, the American Institute of Electrical Engineers, and others, which began at the 1900 Paris International Electrical Congress,, with British engineer R. E. B. Crompton playing a key role. In 1906, Lord Kelvin was elected as the first President of the International Electrotechnical Commission.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The IEC was instrumental in developing and distributing standards for units of measurement, particularly the gauss, hertz, and weber. It was also first to promote the Giorgi System of standards, later developed into the SI, or Système International d'unités (in English, the International System of Units).",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In 1938, it published a multilingual international vocabulary to unify terminology relating to electrical, electronic and related technologies. This effort continues, and the International Electrotechnical Vocabulary is published online as the Electropedia.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The CISPR (Comité International Spécial des Perturbations Radioélectriques) – in English, the International Special Committee on Radio Interference – is one of the groups founded by the IEC.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Currently, 89 countries are IEC members while another 85 participate in the Affiliate Country Programme, which is not a form of membership but is designed to help industrializing countries get involved with the IEC. Originally located in London, the IEC moved to its current headquarters in Geneva, Switzerland in 1948.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "It has regional centres in Africa (Nairobi, Kenya), Asia (Singapore), Oceania (Sydney, Australia), Latin America (São Paulo, Brazil) and North America (Worcester, Massachusetts, United States).",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The work is done by some 10,000 electrical and electronics experts from industry, government, academia, test labs and others with an interest in the subject.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "IEC Standards are often adopted as national standards by its members.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The IEC cooperates closely with the International Organization for Standardization (ISO) and the International Telecommunication Union (ITU). In addition, it works with several major standards development organizations, including the IEEE with which it signed a cooperation agreement in 2002, which was amended in 2008 to include joint development work.",
"title": "IEC Standards"
},
{
"paragraph_id": 12,
"text": "IEC Standards that are not jointly developed with ISO have numbers in the range 60000–79999 and their titles take a form such as IEC 60417: Graphical symbols for use on equipment. Following the Dresden Agreement with CENELEC the numbers of older IEC standards were converted in 1997 by adding 60000, for example IEC 27 became IEC 60027. Standards of the 60000 series are also found preceded by EN to indicate that the IEC standard is also adopted by CENELEC as a European standard; for example IEC 60034 is also available as EN 60034.",
"title": "IEC Standards"
},
{
"paragraph_id": 13,
"text": "Standards developed jointly with ISO, such as ISO/IEC 26300 (Open Document Format for Office Applications (OpenDocument) v1.0), ISO/IEC 27001 (Information technology, Security techniques, Information security management systems, Requirements), and ISO/IEC 17000 series, carry the acronym of both organizations. The use of the ISO/IEC prefix covers publications from ISO/IEC Joint Technical Committee 1 – Information Technology, as well as conformity assessment standards developed by ISO CASCO (Committee on conformity assessment) and IEC CAB (Conformity Assessment Board). Other standards developed in cooperation between IEC and ISO are assigned numbers in the 80000 series, such as IEC 82045–1.",
"title": "IEC Standards"
},
{
"paragraph_id": 14,
"text": "IEC Standards are also being adopted by other certifying bodies such as BSI (United Kingdom), CSA (Canada), UL & ANSI/INCITS (United States), SABS (South Africa), Standards Australia, SPC/GB (China) and DIN (Germany). IEC standards adopted by other certifying bodies may have some noted differences from the original IEC standard.",
"title": "IEC Standards"
},
{
"paragraph_id": 15,
"text": "The IEC is made up of members, called national committees, and each NC represents its nation's electrotechnical interests in the IEC. This includes manufacturers, providers, distributors and vendors, consumers and users, all levels of governmental agencies, professional societies and trade associations as well as standards developers from national standards bodies. National committees are constituted in different ways. Some NCs are public sector only, some are a combination of public and private sector, and some are private sector only. About 90% of those who prepare IEC standards work in industry. IEC Member countries include:",
"title": "Membership and participation"
},
{
"paragraph_id": 16,
"text": "In 2001 and in response to calls from the WTO to open itself to more developing nations, the IEC launched the Affiliate Country Programme to encourage developing nations to become involved in the commission's work or to use its International Standards. Countries signing a pledge to participate in the work and to encourage the use of IEC Standards in national standards and regulations are granted access to a limited number of technical committee documents for the purposes of commenting. In addition, they can select a limited number of IEC Standards for their national standards' library. Countries participating in the Affiliate Country Programme are:",
"title": "Membership and participation"
}
]
| The International Electrotechnical Commission is an international standards organization that prepares and publishes international standards for all electrical, electronic and related technologies – collectively known as "electrotechnology". IEC standards cover a vast range of technologies from power generation, transmission and distribution to home appliances and office equipment, semiconductors, fibre optics, batteries, solar energy, nanotechnology and marine energy as well as many others. The IEC also manages four global conformity assessment systems that certify whether equipment, system or components conform to its international standards. All electrotechnologies are covered by IEC Standards, including energy production and distribution, electronics, magnetics and electromagnetics, electroacoustics, multimedia, telecommunication and medical technology, as well as associated general disciplines such as terminology and symbols, electromagnetic compatibility, measurement and performance, dependability, design and development, safety and the environment. | 2002-02-25T15:51:15Z | 2023-11-17T01:04:36Z | [
"Template:Short description",
"Template:Use Oxford spelling",
"Template:Webarchive",
"Template:Col div end",
"Template:Reflist",
"Template:Citation",
"Template:International Electrotechnical Commission",
"Template:Redirect",
"Template:Infobox organization",
"Template:Citation needed",
"Template:Cite book",
"Template:Lang-fr",
"Template:Legend",
"Template:Cite web",
"Template:Col div",
"Template:Flag",
"Template:ISBN",
"Template:Commons category",
"Template:Official website",
"Template:Primary sources",
"User:RMCD bot/subject notice",
"Template:See also",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/International_Electrotechnical_Commission |
15,145 | ISO 9660 | ISO 9660 (also known as ECMA-119) is a file system for optical disc media. The file system is an international standard available from the International Organization for Standardization (ISO). Since the specification is available for anybody to purchase, implementations have been written for many operating systems.
ISO 9660 traces its roots to the High Sierra Format, which arranged file information in a dense, sequential layout to minimize nonsequential access by using a hierarchical (eight levels of directories deep) tree file system arrangement, similar to UNIX and FAT. To facilitate cross platform compatibility, it defined a minimal set of common file attributes (directory or ordinary file and time of recording) and name attributes (name, extension, and version), and used a separate system use area where future optional extensions for each file may be specified. High Sierra was adopted in December 1986 (with changes) as an international standard by Ecma International as ECMA-119 and submitted for fast tracking to the ISO, where it was eventually accepted as ISO 9660:1988. Subsequent amendments to the standard were published in 2013 and 2020.
The first 16 sectors of the file system are empty and reserved for other uses. The rest begins with a volume descriptor set (a header block which describes the subsequent layout) and then the path tables, directories and files on the disc. An ISO 9660 compliant disc must contain at least one primary volume descriptor describing the file system and a volume descriptor set terminator which is a volume descriptor that marks the end of the descriptor set. The primary volume descriptor provides information about the volume, characteristics and metadata, including a root directory record that indicates in which sector the root directory is located. Other fields contain metadata such as the volume's name and creator, along with the size and number of logical blocks used by the file system. Path tables summarize the directory structure of the relevant directory hierarchy. For each directory in the image, the path table provides the directory identifier, the location of the extent in which the directory is recorded, the length of any extended attributes associated with the directory, and the index of its parent directory path table entry.
There are several extensions to ISO 9660 that relax some of its limitations. Notable examples include Rock Ridge (Unix-style permissions and longer names), Joliet (Unicode, allowing non-Latin scripts to be used), El Torito (enables CDs to be bootable) and the Apple ISO 9660 Extensions (file characteristics specific to the classic Mac OS and macOS, such as resource forks, file backup date and more).
Compact discs were originally developed for recording musical data, but soon were used for storing additional digital data types because they were equally effective for archival mass data storage. The lowest-level format for this type of compact disc, called the CD-ROM, was defined in the Yellow Book specification in 1983. However, this book did not define any format for organizing data on CD-ROMs into logical units such as files, which led to every CD-ROM maker creating its own format. In order to develop a CD-ROM file system standard (Z39.60 - Volume and File Structure of CDROM for Information Interchange), the National Information Standards Organization (NISO) set up Standards Committee SC EE (Compact Disc Data Format) in July 1985. In September/October 1985 several companies invited experts to participate in the development of a working paper for such a standard.
In November 1985, representatives of computer hardware manufacturers gathered at the High Sierra Hotel and Casino (currently called the Hard Rock Hotel and Casino) near Lake Tahoe, California. This group became known as the High Sierra Group (HSG). Present at the meeting were representatives from Apple Computer, AT&T, Digital Equipment Corporation (DEC), Hitachi, LaserData, Microware, Microsoft, 3M, Philips, Reference Technology Inc., Sony Corporation, TMS Inc., VideoTools (later Meridian), Xebec, and Yelick. The meeting report evolved from the Yellow Book CD-ROM standard, which was so open ended it was leading to diversification and creation of many incompatible data storage methods. The High Sierra Group Proposal (HSGP) was released in May 1986, defining a file system for CD-ROMs commonly known as the High Sierra Format.
A draft version of this proposal was submitted to the European Computer Manufacturers Association (ECMA) for standardization. With some changes, this led to the issue of the initial edition of the ECMA-119 standard in December 1986. The ECMA submitted their standard to the International Standards Organization (ISO) for fast tracking, where it was further refined into the ISO 9660 standard. For compatibility the second edition of ECMA-119 was revised to be equivalent to ISO 9660 in December 1987. ISO 9660:1988 was published in 1988. The main changes from the High Sierra Format in the ECMA-119 and ISO 9660 standards were international extensions to allow the format to work better on non-US markets.
In order not to create incompatibilities, NISO suspended further work on Z39.60, which had been adopted by NISO members on 28 May 1987. It was withdrawn before final approval, in favour of ISO 9660.
JIS X 0606:1998 was passed in Japan in 1998 with much-relaxed file name rules using a new "enhanced volume descriptor" data structure. The standard was submitted for ISO 9660:1999 and supposedly fast-tracked, but nothing came out of it. Nevertheless, several operating systems and disc authoring tools (such as Nero Burning ROM, mkisofs and ImgBurn) now support the addition, under such names as "ISO 9660:1999", "ISO 9660 v2", or "ISO 9660 Level 4". In 2013, the proposal was finally formalized in the form of ISO 9660/Amendment 1, intended to "bring harmonization between ISO 9660 and widely used 'Joliet Specification'." In December 2017, a 3rd Edition of ECMA-119 was published that is technically identical with ISO 9660, Amendment 1.
In 2019, ECMA published a 4th version of ECMA-119, integrating the Joliet text as "Annex C".
In 2020, ISO published Amendment 2, which adds some minor clarifying matter, but does not add or correct any technical information of the standard.
The following is the rough overall structure of the ISO 9660 file system.
Multi-byte values can be stored in three different formats: little-endian, big-endian, and in a concatenation of both types in what the specification calls "both-byte" order. Both-byte order is required in several fields in the volume descriptors and directory records, while path tables can be either little-endian or big-endian.
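As an illustration of the "both-byte" layout described above, the following Python sketch packs a 32-bit value as a little-endian copy immediately followed by a big-endian copy. It is illustrative only and assumes the usual convention that the least-significant-byte-first copy comes before the most-significant-byte-first copy.

    import struct

    def encode_both_byte_32(value: int) -> bytes:
        # Little-endian copy first, then big-endian copy: 8 bytes in total.
        return struct.pack('<I', value) + struct.pack('>I', value)

    def decode_both_byte_32(raw: bytes) -> int:
        # Either half can be used; readers typically pick their native byte order.
        little = struct.unpack('<I', raw[0:4])[0]
        big = struct.unpack('>I', raw[4:8])[0]
        assert little == big, "corrupt both-byte field"
        return little

    print(encode_both_byte_32(2048).hex())   # '0008000000000800'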
The system area, the first 32,768 data bytes of the disc (16 sectors of 2,048 bytes each), is unused by ISO 9660 and therefore available for other uses. While it is suggested that they are reserved for use by bootable media, a CD-ROM may contain an alternative file system descriptor in this area, and it is often used by hybrid CDs to offer classic Mac OS-specific and macOS-specific content.
The data area begins with the volume descriptor set, a set of one or more volume descriptors terminated with a volume descriptor set terminator. These collectively act as a header for the data area, describing its content (similar to the BIOS parameter block used by FAT, HPFS and NTFS formatted disks).
Each volume descriptor is 2048 bytes in size, fitting perfectly into a single Mode 1 or Mode 2 Form 1 sector. Each descriptor begins with a one-byte type code and the five-byte standard identifier "CD001", followed by a one-byte version number and 2,041 bytes of type-dependent data.
The data field of a volume descriptor may be subdivided into several fields, with the exact content depending on the type. Redundant copies of each volume descriptor can also be included in case the first copy of the descriptor becomes corrupt.
The standard volume descriptor types are the boot record (type 0), the primary volume descriptor (type 1), the supplementary or enhanced volume descriptor (type 2), the volume partition descriptor (type 3) and the volume descriptor set terminator (type 255).
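As a reading aid for the layout just described (not a full implementation), the Python sketch below walks the volume descriptor set of an image file, printing each descriptor's type until the set terminator is reached. The file name 'example.iso' is a placeholder.

    SECTOR_SIZE = 2048
    TYPE_NAMES = {0: 'boot record', 1: 'primary', 2: 'supplementary/enhanced',
                  3: 'volume partition', 255: 'set terminator'}

    def list_volume_descriptors(path):
        with open(path, 'rb') as f:
            f.seek(16 * SECTOR_SIZE)       # descriptors start after the 16-sector system area
            while True:
                sector = f.read(SECTOR_SIZE)
                if len(sector) < SECTOR_SIZE:
                    break
                vd_type = sector[0]        # one-byte descriptor type
                ident = sector[1:6].decode('ascii', 'replace')   # should be 'CD001'
                print(vd_type, TYPE_NAMES.get(vd_type, 'reserved'), ident)
                if vd_type == 255:         # volume descriptor set terminator
                    break

    list_volume_descriptors('example.iso')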
An ISO 9660 compliant disc must contain at least one primary volume descriptor describing the file system and a volume descriptor set terminator for indicating the end of the descriptor sequence. The volume descriptor set terminator is simply a particular type of volume descriptor with the purpose of marking the end of this set of structures. The primary volume descriptor provides information about the volume, characteristics and metadata, including a root directory record that indicates in which sector the root directory is located. Other fields contain the description or name of the volume, and information about who created it and with which application. The size of the logical blocks which the file system uses to segment the volume is also stored in a field inside the primary volume descriptor, as well as the amount of space occupied by the volume (measured in number of logical blocks).
In addition to the primary volume descriptor(s), supplementary volume descriptors or enhanced volume descriptors may be present.
Path tables summarize the directory structure of the relevant directory hierarchy. For each directory in the image, the path table provides the directory identifier, the location of the extent in which the directory is recorded, the length of any extended attributes associated with the directory, and the index of its parent directory path table entry. The parent directory number is a 16-bit number, limiting its range from 1 to 65,535.
Directory entries are stored following the location of the root directory entry, where evaluation of filenames is begun. Both directories and files are stored as extents, which are sequential series of sectors. Files and directories are differentiated only by a file attribute that indicates its nature (similar to Unix). The attributes of a file are stored in the directory entry that describes the file, and optionally in the extended attribute record. To locate a file, the directory names in the file's path can be checked sequentially, going to the location of each directory to obtain the location of the subsequent subdirectory. However, a file can also be located through the path table provided by the file system. This path table stores information about each directory, its parent, and its location on disc. Since the path table is stored in a contiguous region, it can be searched much faster than jumping to the particular locations of each directory in the file's path, thus reducing seek time.
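One way to picture the path table lookup described above: if the entries are held in memory as (identifier, parent index) pairs, the full path of any directory can be rebuilt by following parent indices back to the root. The sample table contents in this Python sketch are invented for illustration.

    # Each tuple: (directory identifier, 1-based index of the parent's path table entry).
    # Entry 1 is the root directory; the values below are made up for the example.
    path_table = [
        ('\x00', 1),       # 1: root (recorded on disc as a single null byte)
        ('BOOT', 1),       # 2: /BOOT
        ('DOCS', 1),       # 3: /DOCS
        ('EN', 3),         # 4: /DOCS/EN
    ]

    def full_path(index: int) -> str:
        parts = []
        while index != 1:                  # stop when the root entry is reached
            ident, parent = path_table[index - 1]
            parts.append(ident)
            index = parent
        return '/' + '/'.join(reversed(parts))

    print(full_path(4))    # /DOCS/EN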
The standard specifies three nested levels of interchange (paraphrased from section 10). At Level 1, file names are limited to eight characters with a three-character extension, directory names are limited to eight characters, and each file must occupy a single, contiguous extent. Level 2 lifts the 8.3 restriction, allowing the longer identifiers permitted by sections 7.5 and 7.6, but still requires each file to consist of a single extent. Level 3 removes these restrictions and allows a file to be recorded as multiple extents.
Additional restrictions in the body of the standard: the depth of the directory hierarchy must not exceed 8 (the root directory being at level 1), and the path length of any file must not exceed 255 (section 6.8.2.1).
The standard also specifies name restrictions (sections 7.5 and 7.6): identifiers may use only the "d-characters" (the uppercase letters A–Z, the digits 0–9 and the underscore), the combined length of a file name and its extension may not exceed 30 characters, and a directory identifier may not exceed 31 characters.
A CD-ROM producer may choose one of the lower Levels of Interchange specified in chapter 10 of the standard, and further restrict file name length from 30 characters to only 8+3 in file identifiers, and 8 in directory identifiers in order to promote interchangeability with implementations that do not implement the full standard.
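For illustration, here is the kind of check an authoring tool might apply to Level 1 file identifiers (8.3 names drawn from uppercase letters, digits and underscore, with an optional ";" and version number). This Python sketch is a simplified reading of the rules, not the normative text.

    import re

    # d-characters: A-Z, 0-9 and underscore; name up to 8, extension up to 3,
    # optional version number after ';'.
    LEVEL1_FILE_ID = re.compile(r'^[A-Z0-9_]{1,8}\.[A-Z0-9_]{0,3}(;[0-9]{1,5})?$')

    for name in ('README.TXT;1', 'readme.txt', 'VERYLONGNAME.TXT', 'DATA.;1'):
        print(name, bool(LEVEL1_FILE_ID.match(name)))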
All numbers in ISO 9660 file systems except the single byte value used for the GMT offset are unsigned numbers. As the length of a file's extent on disc is stored in a 32-bit value, it allows for a maximum length of just over 4.2 GB (more precisely, one byte less than 4 GiB). It is possible to circumvent this limitation by using the multi-extent (fragmentation) feature of ISO 9660 Level 3 to create ISO 9660 file systems and single files up to 8 TB. With this, files larger than 4 GiB can be split up into multiple extents (sequential series of sectors), each not exceeding the 4 GiB limit. For example, free software such as InfraRecorder, ImgBurn and mkisofs as well as Roxio Toast are able to create ISO 9660 file systems that use multi-extent files to store files larger than 4 GiB on appropriate media such as recordable DVDs. Linux supports multiple extents.
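A small illustration of the size arithmetic above: splitting a large file into extents whose recorded length fits in an unsigned 32-bit field. The cap used here (the largest whole number of 2,048-byte sectors below 4 GiB) is an assumption for the example; the exact per-extent size chosen by real mastering tools may differ.

    SECTOR = 2048
    MAX_EXTENT = (2**32 - 1) // SECTOR * SECTOR   # largest sector-aligned size below 4 GiB

    def extent_sizes(file_size: int):
        # Return the byte length of each extent needed to record the file.
        sizes = []
        remaining = file_size
        while remaining > MAX_EXTENT:
            sizes.append(MAX_EXTENT)
            remaining -= MAX_EXTENT
        sizes.append(remaining)
        return sizes

    # A 10 GiB file needs three extents under this cap.
    print(extent_sizes(10 * 2**30))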
Since amendment 1 (or ECMA-119 3rd edition, or "JIS X 0606:1998 / ISO 9660:1999"), a much wider variety of file trees can be expressed by the EVD system. There is no longer any character limit (even 8-bit characters are allowed), nor any depth limit or path length limit. There still is a limit on name length, at 207. The character set is no longer enforced, so both sides of the disc interchange need to agree via a different channel.
There are several extensions to ISO 9660 that relax some of its limitations. Notable examples include Rock Ridge (Unix-style permissions and longer names), Joliet (Unicode, allowing non-Latin scripts to be used), El Torito (enables CDs to be bootable) and the Apple ISO 9660 Extensions (file characteristics specific to the classic Mac OS and macOS, such as resource forks, file backup date and more).
System Use Sharing Protocol (SUSP, IEEE P1281) provides a generic way of including additional properties for any directory entry reachable from the primary volume descriptor (PVD). In an ISO 9660 volume, every directory entry has an optional system use area whose contents are undefined and left to be interpreted by the system. SUSP defines a method to subdivide that area into multiple system use fields, each identified by a two-character signature tag. The idea behind SUSP was that it would enable any number of independent extensions to ISO 9660 to be created and included on a volume without conflicting. It also allows for the inclusion of property data that would otherwise be too large to fit within the limits of the system use area.
SUSP itself defines several common tags and system use fields, including "CE" (continuation area, allowing system use data to spill into another block), "PD" (padding), "SP" (indicating that SUSP is in use, recorded in the root directory's first entry), "ST" (terminating the use of SUSP), "ER" (identifying which extensions, such as RRIP, are recorded on the volume) and "ES" (selecting among multiple extensions).
Other known SUSP fields include "AA" and "AB" (used by the Apple ISO 9660 extensions), "AS" (used by Amiga Rock Ridge) and "ZF" (used by the zisofs transparent-compression extension).
The Apple extensions do not technically follow the SUSP standard; however the basic structure of the AA and AB fields defined by Apple are forward compatible with SUSP; so that, with care, a volume can use both Apple extensions as well as RRIP extensions.
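To illustrate how a system use area can carry several tagged fields side by side, here is a rough Python sketch that splits such an area into tagged chunks. It assumes the common SUSP field layout of a two-character signature followed by one-byte length and version fields; that layout is stated here as an assumption, since it is not part of the base ISO 9660 text.

    def iter_susp_fields(area: bytes):
        # Yield (signature, version, payload) for each system use field in `area`,
        # assuming: 2-byte signature, 1-byte total length, 1-byte version, then data.
        offset = 0
        while offset + 4 <= len(area):
            signature = area[offset:offset + 2].decode('ascii', 'replace')
            length = area[offset + 2]
            version = area[offset + 3]
            if length < 4:                 # padding or garbage; stop scanning
                break
            yield signature, version, area[offset + 4:offset + length]
            offset += length

    # Invented example area containing a single 5-byte 'PD' padding field.
    example = bytes([0x50, 0x44, 0x05, 0x01, 0x00])
    for sig, ver, data in iter_susp_fields(example):
        print(sig, ver, len(data))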
The Rock Ridge Interchange Protocol (RRIP, IEEE P1282) is an extension which adds POSIX file system semantics. The availability of these extension properties allows for better integration with Unix and Unix-like operating systems. The standard takes its name from the fictional town Rock Ridge in Mel Brooks' film Blazing Saddles. The RRIP extensions are, briefly:
The RRIP extensions are built upon SUSP, defining additional tags for support of POSIX semantics, along with the format and meaning of the corresponding system use fields:
Amiga Rock Ridge is similar to RRIP, except it provides additional properties used by AmigaOS. It too is built on the SUSP standard by defining an "AS"-tagged system use field. Thus both Amiga Rock Ridge and the POSIX RRIP may be used simultaneously on the same volume. Some of the specific properties supported by this extension are the additional Amiga-bits for files. There is support for attribute "P" that stands for "pure" bit (indicating re-entrant command) and attribute "S" for script bit (indicating batch file). This includes the protection flags plus an optional comment field. These extensions were introduced by Angela Schmidt with the help of Andrew Young, the primary author of the Rock Ridge Interchange Protocol and System Use Sharing Protocol. The first publicly available software to master a CD-ROM with Amiga extensions was MakeCD, an Amiga software which Angela Schmidt developed together with Patrick Ohly.
El Torito is an extension designed to allow booting a computer from a CD-ROM. It was announced in November 1994 and first issued in January 1995 as a joint proposal by IBM and BIOS manufacturer Phoenix Technologies. According to legend, the El Torito CD/DVD extension to ISO 9660 got its name because its design originated in an El Torito restaurant in Irvine, California (33°41′05″N 117°51′09″W). The initial two authors were Curtis Stevens, of Phoenix Technologies, and Stan Merkin, of IBM.
A 32-bit PC BIOS will search for boot code on an ISO 9660 CD-ROM. The standard allows for booting in two different modes: either in hard disk emulation, where the boot information can be accessed directly from the CD media, or in floppy emulation mode, where the boot information is stored in an image file of a floppy disk, which is loaded from the CD and then behaves as a virtual floppy disk. This is useful for computers that were designed to boot only from a floppy drive. For modern computers the "no emulation" mode is generally the more reliable method. The BIOS will assign a BIOS drive number to the CD drive. The drive number (for INT 13H) assigned is any of 80h (hard disk emulation), 00h (floppy disk emulation) or an arbitrary number if the BIOS should not provide emulation. Emulation is useful for booting older operating systems from a CD, by making it appear to them as if they were booted from a hard or floppy disk.
El Torito can also be used to produce CDs which can boot up Linux operating systems, by including the GRUB bootloader on the CD and following the Multiboot Specification. While the El Torito spec alludes to a "Mac" platform ID, PowerPC-based Apple Macintosh computers don't use it.
Joliet is an extension specified and endorsed by Microsoft and has been supported by all versions of its Windows operating system since Windows 95 and Windows NT 4.0. Its primary focus is the relaxation of the filename restrictions inherent with full ISO 9660 compliance. Joliet accomplishes this by supplying an additional set of filenames that are encoded in UCS-2BE (UTF-16BE in practice since Windows 2000). These filenames are stored in a special supplementary volume descriptor, that is safely ignored by ISO 9660-compliant software, thus preserving backward compatibility. The specification only allows filenames to be up to 64 Unicode characters in length. However, the documentation for mkisofs states filenames up to 103 characters in length do not appear to cause problems. Microsoft has documented it "can use up to 110 characters." The difference lies in whether CDXA extension space is used.
Joliet allows Unicode characters to be used for all text fields, which includes file names and the volume name. A "Secondary" volume descriptor with type 2 contains the same information as the Primary one (sector 16 offset 40 bytes), but in UCS-2BE in sector 17, offset 40 bytes. As a result of this, the volume name is limited to 16 characters.
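As a small illustration of the UCS-2BE encoding used by Joliet, the Python sketch below shows why a 32-byte volume identifier field holds only 16 characters. The volume name used here is invented for the example.

    # 16 UCS-2 code units occupy 32 bytes; this is why the Joliet volume name
    # caps out at 16 characters.
    name = 'Datenträger 2024'
    field = name[:16].encode('utf-16-be')          # big-endian, 2 bytes per character
    print(len(field), field.decode('utf-16-be'))   # 32 Datenträger 2024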
Many current PC operating systems are able to read Joliet-formatted media, thus allowing exchange of files between those operating systems even if non-Roman characters are involved (such as Arabic, Japanese or Cyrillic), which was formerly not possible with plain ISO 9660-formatted media. Operating systems which can read Joliet media include Microsoft Windows, classic Mac OS and macOS, Linux, FreeBSD and other Unix-like systems.
Romeo was developed by Adaptec and allows the use of long filenames up to 128 characters, written directly into the primary volume descriptor using the current code page. This format is built around the workings of Windows 9x and Windows NT "CDFS" drivers. When a Windows installation of a different language opens a Romeo disk, the lack of code page indication will cause non-ASCII characters in file names to become Mojibake. For example, "ü" may become "³". A different OS may encounter a similar problem or refuse to recognize these noncompliant names outright.
The same code page problem technically exists in standard ISO 9660, which allows open interpretation of the supplemental and enhanced volume descriptors to any character encoding subject to agreement. However, the primary volume descriptor is guaranteed to be a small subset of ASCII.
Apple Computer authored a set of extensions that add ProDOS or HFS/HFS+ (the primary contemporary file systems for the classic Mac OS) properties to the filesystem. Some of the additional metadata properties include the date of last backup, the Macintosh file type and creator codes, and Finder flags.
In order to allow non-Macintosh systems to access Macintosh files on CD-ROMs, Apple chose to use an extension of the standard ISO 9660 format. Most of the data, other than the Apple specific metadata, remains visible to operating systems that are able to read ISO 9660.
For operating systems which do not support any extensions, a name translation file TRANS.TBL must be used. The TRANS.TBL file is a plain ASCII text file. Each line contains three fields, separated by an arbitrary amount of whitespace: a one-letter file type (such as "F" for an ordinary file or "D" for a directory), the ISO 9660 name as recorded on the disc, and the original, extended file name.
Most implementations that create TRANS.TBL files put a single space between the file type and ISO 9660 name and some arbitrary number of tabs between the ISO 9660 filename and the extended filename.
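The whitespace-separated layout described above makes TRANS.TBL easy to read programmatically. The following Python sketch, with an invented sample line, splits each entry into its three fields.

    def parse_trans_tbl(lines):
        # Yield (file_type, iso9660_name, extended_name) for each TRANS.TBL line.
        for line in lines:
            line = line.rstrip('\n')
            if not line.strip():
                continue
            # Split on the first two runs of whitespace; the extended name may
            # itself contain spaces, so keep the remainder intact.
            file_type, iso_name, extended = line.split(None, 2)
            yield file_type, iso_name, extended

    sample = ['F README.TXT;1\t\tRead Me First.txt']   # invented example entry
    print(list(parse_trans_tbl(sample)))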
Native support for using TRANS.TBL still exists in many ISO 9660 implementations, particularly those related to Unix. However, it has long since been superseded by other extensions, and modern utilities that create ISO 9660 images either cannot create TRANS.TBL files at all, or no longer create them unless explicitly requested by the user. Since a TRANS.TBL file has no special identification other than its name, it can also be created separately and included in the directory before filesystem creation.
The ISO 13490 standard is an extension to the ISO 9660 format that adds support for multiple sessions on a disc. Since ISO 9660 is by design a read-only, pre-mastered file system, all the data has to be written in one go or "session" to the medium. Once written, there is no provision for altering the stored content. ISO 13490 was created to allow adding more files to a writeable disc such as CD-R in multiple sessions.
The ISO 13346/ECMA-167 standard was designed in conjunction with the ISO 13490 standard. This newer format addresses most of the shortcomings of ISO 9660, and a subset of it evolved into the Universal Disk Format (UDF), which was adopted for DVDs. The volume descriptor table retains the ISO 9660 layout, but the identifier has been updated.
Optical disc images are a common way to electronically transfer the contents of CD-ROMs. They often have the filename extension .iso (.iso9660 is less common, but also in use) and are commonly referred to as "ISOs".
Most operating systems support reading of ISO 9660 formatted discs, and most new versions support the extensions such as Rock Ridge and Joliet. Operating systems that do not support the extensions usually show the basic (non-extended) features of a plain ISO 9660 disc.
Operating systems that support ISO 9660 and its extensions include the following: | [
{
"paragraph_id": 0,
"text": "ISO 9660 (also known as ECMA-119) is a file system for optical disc media. The file system is an international standard available from the International Organization for Standardization (ISO). Since the specification is available for anybody to purchase, implementations have been written for many operating systems.",
"title": ""
},
{
"paragraph_id": 1,
"text": "ISO 9660 traces its roots to the High Sierra Format, which arranged file information in a dense, sequential layout to minimize nonsequential access by using a hierarchical (eight levels of directories deep) tree file system arrangement, similar to UNIX and FAT. To facilitate cross platform compatibility, it defined a minimal set of common file attributes (directory or ordinary file and time of recording) and name attributes (name, extension, and version), and used a separate system use area where future optional extensions for each file may be specified. High Sierra was adopted in December 1986 (with changes) as an international standard by Ecma International as ECMA-119 and submitted for fast tracking to the ISO, where it was eventually accepted as ISO 9660:1988. Subsequent amendments to the standard were published in 2013 and 2020.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The first 16 sectors of the file system are empty and reserved for other uses. The rest begins with a volume descriptor set (a header block which describes the subsequent layout) and then the path tables, directories and files on the disc. An ISO 9660 compliant disc must contain at least one primary volume descriptor describing the file system and a volume descriptor set terminator which is a volume descriptor that marks the end of the descriptor set. The primary volume descriptor provides information about the volume, characteristics and metadata, including a root directory record that indicates in which sector the root directory is located. Other fields contain metadata such as the volume's name and creator, along with the size and number of logical blocks used by the file system. Path tables summarize the directory structure of the relevant directory hierarchy. For each directory in the image, the path table provides the directory identifier, the location of the extent in which the directory is recorded, the length of any extended attributes associated with the directory, and the index of its parent directory path table entry.",
"title": ""
},
{
"paragraph_id": 3,
"text": "There are several extensions to ISO 9660 that relax some of its limitations. Notable examples include Rock Ridge (Unix-style permissions and longer names), Joliet (Unicode, allowing non-Latin scripts to be used), El Torito (enables CDs to be bootable) and the Apple ISO 9660 Extensions (file characteristics specific to the classic Mac OS and macOS, such as resource forks, file backup date and more).",
"title": ""
},
{
"paragraph_id": 4,
"text": "Compact discs were originally developed for recording musical data, but soon were used for storing additional digital data types because they were equally effective for archival mass data storage. Called CD-ROMs, the lowest level format for these type of compact discs was defined in the Yellow Book specification in 1983. However, this book did not define any format for organizing data on CD-ROMs into logical units such as files, which led to every CD-ROM maker creating its own format. In order to develop a CD-ROM file system standard (Z39.60 - Volume and File Structure of CDROM for Information Interchange), the National Information Standards Organization (NISO) set up Standards Committee SC EE (Compact Disc Data Format) in July 1985. In September/ October 1985 several companies invited experts to participate in the development of a working paper for such a standard.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In November 1985, representatives of computer hardware manufacturers gathered at the High Sierra Hotel and Casino (currently called the Hard Rock Hotel and Casino) near Lake Tahoe, California. This group became known as the High Sierra Group (HSG). Present at the meeting were representatives from Apple Computer, AT&T, Digital Equipment Corporation (DEC), Hitachi, LaserData, Microware, Microsoft, 3M, Philips, Reference Technology Inc., Sony Corporation, TMS Inc., VideoTools (later Meridian), Xebec, and Yelick. The meeting report evolved from the Yellow Book CD-ROM standard, which was so open ended it was leading to diversification and creation of many incompatible data storage methods. The High Sierra Group Proposal (HSGP) was released in May 1986, defining a file system for CD-ROMs commonly known as the High Sierra Format.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "A draft version of this proposal was submitted to the European Computer Manufacturers Association (ECMA) for standardization. With some changes, this led to the issue of the initial edition of the ECMA-119 standard in December 1986. The ECMA submitted their standard to the International Standards Organization (ISO) for fast tracking, where it was further refined into the ISO 9660 standard. For compatibility the second edition of ECMA-119 was revised to be equivalent to ISO 9660 in December 1987. ISO 9660:1988 was published in 1988. The main changes from the High Sierra Format in the ECMA-119 and ISO 9660 standards were international extensions to allow the format to work better on non-US markets.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In order not to create incompatibilities, NISO suspended further work on Z39.60, which had been adopted by NISO members on 28 May 1987. It was withdrawn before final approval, in favour of ISO 9660.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "JIS X 0606:1998 was passed in Japan in 1998 with much-relaxed file name rules using a new \"enhanced volume descriptor\" data structure. The standard was submitted for ISO 9660:1999 and supposedly fast-tracked, but nothing came out of it. Nevertheless, several operating systems and disc authoring tools (such as Nero Burning ROM, mkisofs and ImgBurn) now support the addition, under such names as \"ISO 9660:1999\", \"ISO 9660 v2\", or \"ISO 9660 Level 4\". In 2013, the proposal was finally formalized in the form of ISO 9660/Amendment 1, intended to \"bring harmonization between ISO 9660 and widely used 'Joliet Specification'.\" In December 2017, a 3rd Edition of ECMA-119 was published that is technically identical with ISO 9660, Amendment 1.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In 2019, ECMA published a 4th version of ECMA-119, integrating the Joliet text as \"Annex C\".",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In 2020, ISO published Amendment 2, which adds some minor clarifying matter, but does not add or correct any technical information of the standard.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The following is the rough overall structure of the ISO 9660 file system.",
"title": "Specifications"
},
{
"paragraph_id": 12,
"text": "Multi-byte values can be stored in three different formats: little-endian, big-endian, and in a concatenation of both types in what the specification calls \"both-byte\" order. Both-byte order is required in several fields in the volume descriptors and directory records, while path tables can be either little-endian or big-endian.",
"title": "Specifications"
},
{
"paragraph_id": 13,
"text": "The system area, the first 32,768 data bytes of the disc (16 sectors of 2,048 bytes each), is unused by ISO 9660 and therefore available for other uses. While it is suggested that they are reserved for use by bootable media, a CD-ROM may contain an alternative file system descriptor in this area, and it is often used by hybrid CDs to offer classic Mac OS-specific and macOS-specific content.",
"title": "Specifications"
},
{
"paragraph_id": 14,
"text": "The data area begins with the volume descriptor set, a set of one or more volume descriptors terminated with a volume descriptor set terminator. These collectively act as a header for the data area, describing its content (similar to the BIOS parameter block used by FAT, HPFS and NTFS formatted disks).",
"title": "Specifications"
},
{
"paragraph_id": 15,
"text": "Each volume descriptor is 2048 bytes in size, fitting perfectly into a single Mode 1 or Mode 2 Form 1 sector. They have the following structure:",
"title": "Specifications"
},
{
"paragraph_id": 16,
"text": "The data field of a volume descriptor may be subdivided into several fields, with the exact content depending on the type. Redundant copies of each volume descriptor can also be included in case the first copy of the descriptor becomes corrupt.",
"title": "Specifications"
},
{
"paragraph_id": 17,
"text": "Standard volume descriptor types are the following:",
"title": "Specifications"
},
{
"paragraph_id": 18,
"text": "An ISO 9660 compliant disc must contain at least one primary volume descriptor describing the file system and a volume descriptor set terminator for indicating the end of the descriptor sequence. The volume descriptor set terminator is simply a particular type of volume descriptor with the purpose of marking the end of this set of structures. The primary volume descriptor provides information about the volume, characteristics and metadata, including a root directory record that indicates in which sector the root directory is located. Other fields contain the description or name of the volume, and information about who created it and with which application. The size of the logical blocks which the file system uses to segment the volume is also stored in a field inside the primary volume descriptor, as well as the amount of space occupied by the volume (measured in number of logical blocks).",
"title": "Specifications"
},
{
"paragraph_id": 19,
"text": "In addition to the primary volume descriptor(s), supplementary volume descriptors or enhanced volume descriptors may be present.",
"title": "Specifications"
},
{
"paragraph_id": 20,
"text": "Path tables summarize the directory structure of the relevant directory hierarchy. For each directory in the image, the path table provides the directory identifier, the location of the extent in which the directory is recorded, the length of any extended attributes associated with the directory, and the index of its parent directory path table entry. The parent directory number is a 16-bit number, limiting its range from 1 to 65,535.",
"title": "Specifications"
},
{
"paragraph_id": 21,
"text": "Directory entries are stored following the location of the root directory entry, where evaluation of filenames is begun. Both directories and files are stored as extents, which are sequential series of sectors. Files and directories are differentiated only by a file attribute that indicates its nature (similar to Unix). The attributes of a file are stored in the directory entry that describes the file, and optionally in the extended attribute record. To locate a file, the directory names in the file's path can be checked sequentially, going to the location of each directory to obtain the location of the subsequent subdirectory. However, a file can also be located through the path table provided by the file system. This path table stores information about each directory, its parent, and its location on disc. Since the path table is stored in a contiguous region, it can be searched much faster than jumping to the particular locations of each directory in the file's path, thus reducing seek time.",
"title": "Specifications"
},
{
"paragraph_id": 22,
"text": "The standard specifies three nested levels of interchange (paraphrased from section 10):",
"title": "Specifications"
},
{
"paragraph_id": 23,
"text": "Additional restrictions in the body of the standard: The depth of the directory hierarchy must not exceed 8 (root directory being at level 1), and the path length of any file must not exceed 255. (section 6.8.2.1).",
"title": "Specifications"
},
{
"paragraph_id": 24,
"text": "The standard also specifies the following name restrictions (sections 7.5 and 7.6):",
"title": "Specifications"
},
{
"paragraph_id": 25,
"text": "A CD-ROM producer may choose one of the lower Levels of Interchange specified in chapter 10 of the standard, and further restrict file name length from 30 characters to only 8+3 in file identifiers, and 8 in directory identifiers in order to promote interchangeability with implementations that do not implement the full standard.",
"title": "Specifications"
},
{
"paragraph_id": 26,
"text": "All numbers in ISO 9660 file systems except the single byte value used for the GMT offset are unsigned numbers. As the length of a file's extent on disc is stored in a 32 bit value, it allows for a maximum length of just over 4.2 GB (more precisely, one byte less than 4 GiB). It is possible to circumvent this limitation by using the multi-extent (fragmentation) feature of ISO 9660 Level 3 to create ISO 9660 file systems and single files up to 8 TB. With this, files larger than 4 GiB can be split up into multiple extents (sequential series of sectors), each not exceeding the 4 GiB limit. For example, the free software such as InfraRecorder, ImgBurn and mkisofs as well as Roxio Toast are able to create ISO 9660 file systems that use multi-extent files to store files larger than 4 GiB on appropriate media such as recordable DVDs. Linux supports multiple extents.",
"title": "Specifications"
},
{
"paragraph_id": 27,
"text": "Since amendment 1 (or ECMA-119 3rd edition, or \"JIS X 0606:1998 / ISO 9660:1999\"), a much wider variety of file trees can be expressed by the EVD system. There is no longer any character limit (even 8-bit characters are allowed), nor any depth limit or path length limit. There still is a limit on name length, at 207. The character set is no longer enforced, so both sides of the disc interchange need to agree via a different channel.",
"title": "Specifications"
},
{
"paragraph_id": 28,
"text": "There are several extensions to ISO 9660 that relax some of its limitations. Notable examples include Rock Ridge (Unix-style permissions and longer names), Joliet (Unicode, allowing non-Latin scripts to be used), El Torito (enables CDs to be bootable) and the Apple ISO 9660 Extensions (file characteristics specific to the classic Mac OS and macOS, such as resource forks, file backup date and more).",
"title": "Extensions and improvements"
},
{
"paragraph_id": 29,
"text": "System Use Sharing Protocol (SUSP, IEEE P1281) provides a generic way of including additional properties for any directory entry reachable from the primary volume descriptor (PVD). In an ISO 9660 volume, every directory entry has an optional system use area whose contents are undefined and left to be interpreted by the system. SUSP defines a method to subdivide that area into multiple system use fields, each identified by a two-character signature tag. The idea behind SUSP was that it would enable any number of independent extensions to ISO 9660 to be created and included on a volume without conflicting. It also allows for the inclusion of property data that would otherwise be too large to fit within the limits of the system use area.",
"title": "Extensions and improvements"
},
{
"paragraph_id": 30,
"text": "SUSP defines several common tags and system use fields:",
"title": "Extensions and improvements"
},
{
"paragraph_id": 31,
"text": "Other known SUSP fields include:",
"title": "Extensions and improvements"
},
{
"paragraph_id": 32,
"text": "The Apple extensions do not technically follow the SUSP standard; however the basic structure of the AA and AB fields defined by Apple are forward compatible with SUSP; so that, with care, a volume can use both Apple extensions as well as RRIP extensions.",
"title": "Extensions and improvements"
},
{
"paragraph_id": 33,
"text": "The Rock Ridge Interchange Protocol (RRIP, IEEE P1282) is an extension which adds POSIX file system semantics. The availability of these extension properties allows for better integration with Unix and Unix-like operating systems. The standard takes its name from the fictional town Rock Ridge in Mel Brooks' film Blazing Saddles. The RRIP extensions are, briefly:",
"title": "Extensions and improvements"
},
{
"paragraph_id": 34,
"text": "The RRIP extensions are built upon SUSP, defining additional tags for support of POSIX semantics, along with the format and meaning of the corresponding system use fields:",
"title": "Extensions and improvements"
},
{
"paragraph_id": 35,
"text": "Amiga Rock Ridge is similar to RRIP, except it provides additional properties used by AmigaOS. It too is built on the SUSP standard by defining an \"AS\"-tagged system use field. Thus both Amiga Rock Ridge and the POSIX RRIP may be used simultaneously on the same volume. Some of the specific properties supported by this extension are the additional Amiga-bits for files. There is support for attribute \"P\" that stands for \"pure\" bit (indicating re-entrant command) and attribute \"S\" for script bit (indicating batch file). This includes the protection flags plus an optional comment field. These extensions were introduced by Angela Schmidt with the help of Andrew Young, the primary author of the Rock Ridge Interchange Protocol and System Use Sharing Protocol. The first publicly available software to master a CD-ROM with Amiga extensions was MakeCD, an Amiga software which Angela Schmidt developed together with Patrick Ohly.",
"title": "Extensions and improvements"
},
{
"paragraph_id": 36,
"text": "El Torito is an extension designed to allow booting a computer from a CD-ROM. It was announced in November 1994 and first issued in January 1995 as a joint proposal by IBM and BIOS manufacturer Phoenix Technologies. According to legend, the El Torito CD/DVD extension to ISO 9660 got its name because its design originated in an El Torito restaurant in Irvine, California (33°41′05″N 117°51′09″W / 33.684722°N 117.852547°W / 33.684722; -117.852547). The initial two authors were Curtis Stevens, of Phoenix Technologies, and Stan Merkin, of IBM.",
"title": "Extensions and improvements"
},
{
"paragraph_id": 37,
"text": "A 32-bit PC BIOS will search for boot code on an ISO 9660 CD-ROM. The standard allows for booting in two different modes. Either in hard disk emulation when the boot information can be accessed directly from the CD media, or in floppy emulation mode where the boot information is stored in an image file of a floppy disk, which is loaded from the CD and then behaves as a virtual floppy disk. This is useful for computers that were designed to boot only from a floppy drive. For modern computers the \"no emulation\" mode is generally the more reliable method. The BIOS will assign a BIOS drive number to the CD drive. The drive number (for INT 13H) assigned is any of 80hex (hard disk emulation), 00hex (floppy disk emulation) or an arbitrary number if the BIOS should not provide emulation. Emulation is useful for booting older operating systems from a CD, by making it appear to them as if they were booted from a hard or floppy disk.",
"title": "Extensions and improvements"
},
{
"paragraph_id": 38,
"text": "El Torito can also be used to produce CDs which can boot up Linux operating systems, by including the GRUB bootloader on the CD and following the Multiboot Specification. While the El Torito spec alludes to a \"Mac\" platform ID, PowerPC-based Apple Macintosh computers don't use it.",
"title": "Extensions and improvements"
},
{
"paragraph_id": 39,
"text": "Joliet is an extension specified and endorsed by Microsoft and has been supported by all versions of its Windows operating system since Windows 95 and Windows NT 4.0. Its primary focus is the relaxation of the filename restrictions inherent with full ISO 9660 compliance. Joliet accomplishes this by supplying an additional set of filenames that are encoded in UCS-2BE (UTF-16BE in practice since Windows 2000). These filenames are stored in a special supplementary volume descriptor, that is safely ignored by ISO 9660-compliant software, thus preserving backward compatibility. The specification only allows filenames to be up to 64 Unicode characters in length. However, the documentation for mkisofs states filenames up to 103 characters in length do not appear to cause problems. Microsoft has documented it \"can use up to 110 characters.\" The difference lies in whether CDXA extension space is used.",
"title": "Extensions and improvements"
},
{
"paragraph_id": 40,
"text": "Joliet allows Unicode characters to be used for all text fields, which includes file names and the volume name. A \"Secondary\" volume descriptor with type 2 contains the same information as the Primary one (sector 16 offset 40 bytes), but in UCS-2BE in sector 17, offset 40 bytes. As a result of this, the volume name is limited to 16 characters.",
"title": "Extensions and improvements"
},
{
"paragraph_id": 41,
"text": "Many current PC operating systems are able to read Joliet-formatted media, thus allowing exchange of files between those operating systems even if non-Roman characters are involved (such as Arabic, Japanese or Cyrillic), which was formerly not possible with plain ISO 9660-formatted media. Operating systems which can read Joliet media include:",
"title": "Extensions and improvements"
},
{
"paragraph_id": 42,
"text": "Romeo was developed by Adaptec and allows the use of long filenames up to 128 characters, written directly into the primary volume descriptor using the current code page. This format is built around the workings of Windows 9x and Windows NT \"CDFS\" drivers. When a Windows installation of a different language opens a Romeo disk, the lack of code page indication will cause non-ASCII characters in file names to become Mojibake. For example, \"ü\" may become \"³\". A different OS may encounter a similar problem or refuse to recognize these noncompliant names outright.",
"title": "Extensions and improvements"
},
{
"paragraph_id": 43,
"text": "The same code page problem technically exists in standard ISO 9660, which allows open interpretation of the supplemental and enhanced volume descriptors to any character encoding subject to agreement. However, the primary volume descriptor is guaranteed to be a small subset of ASCII.",
"title": "Extensions and improvements"
},
{
"paragraph_id": 44,
"text": "Apple Computer authored a set of extensions that add ProDOS or HFS/HFS+ (the primary contemporary file systems for the classic Mac OS) properties to the filesystem. Some of the additional metadata properties include:",
"title": "Extensions and improvements"
},
{
"paragraph_id": 45,
"text": "In order to allow non-Macintosh systems to access Macintosh files on CD-ROMs, Apple chose to use an extension of the standard ISO 9660 format. Most of the data, other than the Apple specific metadata, remains visible to operating systems that are able to read ISO 9660.",
"title": "Extensions and improvements"
},
{
"paragraph_id": 46,
"text": "For operating systems which do not support any extensions, a name translation file TRANS.TBL must be used. The TRANS.TBL file is a plain ASCII text file. Each line contains three fields, separated by an arbitrary amount of whitespace:",
"title": "Extensions and improvements"
},
{
"paragraph_id": 47,
"text": "Most implementations that create TRANS.TBL files put a single space between the file type and ISO 9660 name and some arbitrary number of tabs between the ISO 9660 filename and the extended filename.",
"title": "Extensions and improvements"
},
{
"paragraph_id": 48,
"text": "Native support for using TRANS.TBL still exists in many ISO 9660 implementations, particularly those related to Unix. However, it has long since been superseded by other extensions, and modern utilities that create ISO 9660 images either cannot create TRANS.TBL files at all, or no longer create them unless explicitly requested by the user. Since a TRANS.TBL file has no special identification other than its name, it can also be created separately and included in the directory before filesystem creation.",
"title": "Extensions and improvements"
},
{
"paragraph_id": 49,
"text": "The ISO 13490 standard is an extension to the ISO 9660 format that adds support for multiple sessions on a disc. Since ISO 9660 is by design a read-only, pre-mastered file system, all the data has to be written in one go or \"session\" to the medium. Once written, there is no provision for altering the stored content. ISO 13490 was created to allow adding more files to a writeable disc such as CD-R in multiple sessions.",
"title": "Extensions and improvements"
},
{
"paragraph_id": 50,
"text": "The ISO 13346/ECMA-167 standard was designed in conjunction to the ISO 13490 standard. This new format addresses most of the shortcomings of ISO 9660, and a subset of it evolved into the Universal Disk Format (UDF), which was adopted for DVDs. The volume descriptor table retains the ISO9660 layout, but the identifier has been updated.",
"title": "Extensions and improvements"
},
{
"paragraph_id": 51,
"text": "Optical disc images are a common way to electronically transfer the contents of CD-ROMs. They often have the filename extension .iso (.iso9660 is less common, but also in use) and are commonly referred to as \"ISOs\".",
"title": "Disc images"
},
{
"paragraph_id": 52,
"text": "Most operating systems support reading of ISO 9660 formatted discs, and most new versions support the extensions such as Rock Ridge and Joliet. Operating systems that do not support the extensions usually show the basic (non-extended) features of a plain ISO 9660 disc.",
"title": "Platforms"
},
{
"paragraph_id": 53,
"text": "Operating systems that support ISO 9660 and its extensions include the following:",
"title": "Platforms"
}
]
| ISO 9660 is a file system for optical disc media. The file system is an international standard available from the International Organization for Standardization (ISO). Since the specification is available for anybody to purchase, implementations have been written for many operating systems. ISO 9660 traces its roots to the High Sierra Format, which arranged file information in a dense, sequential layout to minimize nonsequential access by using a hierarchical tree file system arrangement, similar to UNIX and FAT. To facilitate cross platform compatibility, it defined a minimal set of common file attributes and name attributes, and used a separate system use area where future optional extensions for each file may be specified. High Sierra was adopted in December 1986 as an international standard by Ecma International as ECMA-119 and submitted for fast tracking to the ISO, where it was eventually accepted as ISO 9660:1988. Subsequent amendments to the standard were published in 2013 and 2020. The first 16 sectors of the file system are empty and reserved for other uses. The rest begins with a volume descriptor set and then the path tables, directories and files on the disc. An ISO 9660 compliant disc must contain at least one primary volume descriptor describing the file system and a volume descriptor set terminator which is a volume descriptor that marks the end of the descriptor set. The primary volume descriptor provides information about the volume, characteristics and metadata, including a root directory record that indicates in which sector the root directory is located. Other fields contain metadata such as the volume's name and creator, along with the size and number of logical blocks used by the file system. Path tables summarize the directory structure of the relevant directory hierarchy. For each directory in the image, the path table provides the directory identifier, the location of the extent in which the directory is recorded, the length of any extended attributes associated with the directory, and the index of its parent directory path table entry. There are several extensions to ISO 9660 that relax some of its limitations. Notable examples include Rock Ridge, Joliet, El Torito and the Apple ISO 9660 Extensions. | 2001-10-30T23:42:21Z | 2023-11-20T09:36:20Z | [
"Template:CoordDec",
"Template:Reflist",
"Template:Cite book",
"Template:Cite mailing list",
"Template:Man",
"Template:File systems",
"Template:Short description",
"Template:Citation needed",
"Template:Infobox file system",
"Template:Cite press release",
"Template:Cite news",
"Template:Ecma International Standards",
"Template:ISO standards",
"Template:Cite web",
"Template:Cite journal",
"Template:Missing information",
"Template:Freshmeat",
"Template:Use dmy dates",
"Template:Optical disc authoring"
]
| https://en.wikipedia.org/wiki/ISO_9660 |
15,146 | Ice skating | Ice skating is the self-propulsion and gliding of a person across an ice surface, using metal-bladed ice skates. People skate for various reasons, including recreation (fun), exercise, competitive sports, and commuting. Ice skating may be performed on naturally frozen bodies of water, such as ponds, lakes, canals, and rivers, and on human-made ice surfaces both indoors and outdoors.
Natural ice surfaces used by skaters can accommodate a variety of winter sports which generally require an enclosed area, but are also used by skaters who need ice tracks and trails for distance skating and speed skating. Man-made ice surfaces include ice rinks, ice hockey rinks, bandy fields, ice tracks required for the sport of ice cross downhill, and arenas.
Various formal sports involving ice skating have emerged since the 19th century. Ice hockey, bandy, rinkball, and ringette are team sports, played with a flat sliding puck (ice hockey), a ball (bandy and rinkball), and a rubber ring (ringette). Synchronized skating is a unique artistic team sport derived from figure skating. Figure skating, ice cross downhill, speed skating, and barrel jumping (a discipline of speed skating) are among the sporting disciplines for individuals.
Research suggests that the earliest ice skating took place in southern Finland more than 4,000 years ago, as a way to save energy during winter journeys. True skating emerged only when a steel blade with sharpened edges came into use, so that the skate cut into the ice instead of gliding on top of it. The Dutch added such edges to ice skates in the 13th or 14th century; these skates were made of steel, with sharpened edges on the bottom to aid movement.
The fundamental construction of modern ice skates has stayed largely the same since then, although differing greatly in the details, particularly in the method of binding and the shape and construction of the steel blades. In the Netherlands, ice skating was considered proper for all classes of people, as shown in many pictures from Dutch Golden Age painters.
Ice skating was also practiced in China during the Song dynasty, and became popular among the ruling family of the Qing dynasty. Ancient ice skates made of animal bones were found at the Bronze Age Gaotai Ruins in northwest China and are estimated to be around 3,500 years old. Archaeologists say these ancient skates are "clear evidence for communication between China and Europe" in the Bronze Age, as they are very similar to bone skates unearthed in Europe.
In England "the London boys" had improvised butcher's bones as skates since the 12th century. Skating on metal skates seems to have arrived in England at the same time as the garden canal, with the English Restoration in 1660, after the king and court returned from an exile largely spent in the Netherlands. In London the ornamental "canal" in St James's Park was the main centre until the 19th century. Both Samuel Pepys and John Evelyn, the two leading diarists of the day, saw it on the "new canal" there on 1 December 1662, the first time Pepys had ever seen it ("a very pretty art"). Then it was "performed before their Majesties and others, by diverse gentlemen and others, with scheets after the manner of the Hollanders". Two weeks later, on 15 December 1662, Pepys accompanied the Duke of York, later King James II, on a skating outing: "To the Duke, and followed him in the Park, when, though the ice was broken, he would go slide upon his skates, which I did not like; but he slides very well." In 1711 Jonathan Swift still thinks the sport might be unfamiliar to his "Stella", writing to her: "Delicate walking weather; and the Canal and Rosamund's Pond full of the rabble and with skates, if you know what that is."
The first organised skating club was the Edinburgh Skating Club, formed in the 1740s; some claim the club was established as early as 1642.
An early contemporary reference to the club appeared in the second edition (1783) of the Encyclopædia Britannica:
The metropolis of Scotland has produced more instances of elegant skaters than perhaps any country whatever: and the institution of a skating club about 40 years ago has contributed not a little to the improvement of this elegant amusement.
From this description and others, it is apparent that the form of skating practiced by club members was indeed an early form of figure skating rather than speed skating. For admission to the club, candidates had to pass a skating test where they performed a complete circle on either foot (e.g., a figure eight), and then jumped over first one hat, then two and three, placed over each other on the ice.
On the Continent, participation in ice skating was limited to members of the upper classes. Emperor Rudolf II of the Holy Roman Empire enjoyed ice skating so much that he had a large ice carnival constructed at his court in order to popularise the sport. King Louis XVI of France brought ice skating to Paris during his reign. Madame de Pompadour, Napoleon I, Napoleon III, and the House of Stuart were, among others, royal and upper-class fans of ice skating.
The next skating club to be established was in London, and it was not founded until 1830. Members wore a silver skate hanging from their buttonhole and met on The Serpentine in Hyde Park on 27 December 1830. By the mid-19th century, ice skating was a popular pastime among the British upper and middle classes. Queen Victoria became acquainted with her future husband, Prince Albert, through a series of ice skating trips. Albert continued to skate after their marriage and was once rescued by Victoria and a lady-in-waiting after falling through the ice on a stretch of water in the grounds of Buckingham Palace.
Early attempts at the construction of artificial ice rinks were made during the "rink mania" of 1841–44. As the technology for the maintenance of natural ice did not exist, these early rinks used a substitute consisting of a mixture of hog's lard and various salts. An item in the 8 May 1844 issue of Littell's 'Living Age', headed 'Glaciarium', reported that "This establishment, which has been removed to Grafton Street East, Tottenham Court Road, was opened on Monday afternoon. The area of artificial ice is extremely convenient for such as may be desirous of engaging in the graceful and manly pastime of skating."
Skating became popular as a recreation, a means of transport and spectator sport in The Fens in England for people from all walks of life. Racing was the preserve of workers, most of them agricultural labourers. It is not known when the first skating matches were held, but by the early nineteenth century racing was well established and the results of matches were reported in the press. Skating as a sport developed on the lakes of Scotland and the canals of the Netherlands. In the 13th and 14th centuries wood was substituted for bone in skate blades, and in 1572 the first iron skates were manufactured. When the waters froze, skating matches were held in towns and villages all over the Fens. In these local matches men (or sometimes women or children) would compete for prizes of money, clothing, or food.
The winners of local matches were invited to take part in the grand or championship matches, in which skaters from across the Fens would compete for cash prizes in front of crowds of thousands. The championship matches took the form of a Welsh main or "last man standing" contest (single-elimination tournament). The competitors, 16 or sometimes 32, were paired off in heats, and the winner of each heat went through to the next round. A course of 660 yards was measured out on the ice, and a barrel with a flag on it was placed at either end. For a one-and-a-half-mile race the skaters completed two rounds of the course, with three barrel turns: four 660-yard legs totalling 2,640 yards, or one and a half miles.
In the Fens, skates were called pattens, fen runners, or Whittlesey runners. The footstock was made of beechwood. A screw at the back was screwed into the heel of the boot, and three small spikes at the front kept the skate steady. There were holes in the footstock for leather straps to fasten it to the foot. The metal blades were slightly higher at the back than the front. In the 1890s, fen skaters started to race in Norwegian style skates.
On Saturday 1 February 1879, a number of professional ice skaters from Cambridgeshire and Huntingdonshire met in the Guildhall, Cambridge, to set up the National Skating Association, the first national ice skating body in the world. The founding committee consisted of several landowners, a vicar, a fellow of Trinity College, a magistrate, two members of parliament, the mayor of Cambridge, the Lord Lieutenant of Cambridge, journalist James Drake Digby, the president of Cambridge University Skating Club, and Neville Goodman, a graduate of Peterhouse, Cambridge (and son of Potto Brown's milling partner, Joseph Goodman). The newly formed Association held their first one-and-a-half-mile British professional championship at Thorney in December 1879.
The first instructional book concerning ice skating was published in London in 1772. The book titled The Art of Figure Skating, written by a British artillery lieutenant, Robert Jones, describes basic figure skating forms such as circles and figure eights. The book was written solely for men, as women did not normally ice skate in the late 18th century. It was with the publication of this manual that ice skating split into its two main disciplines, speed skating and figure skating.
The founder of modern figure skating as it is known today was Jackson Haines, an American. He was the first skater to incorporate ballet and dance movements into his skating, as opposed to focusing on tracing patterns on the ice. Haines also invented the sit spin and developed a shorter, curved blade for figure skating that allowed for easier turns. He was also the first to wear blades that were permanently attached to the boot.
The International Skating Union, the first international ice skating organisation, was founded in Scheveningen, in the Netherlands, in 1892. The Union created the first codified set of figure skating rules and governed international competition in speed and figure skating. The first Championship, known as the Championship of the Internationale Eislauf-Vereinigung, was held in Saint Petersburg in 1896. The event had four competitors and was won by Gilbert Fuchs.
A skate can glide over ice because there is a layer of ice molecules on the surface that are not as tightly bound as the molecules of the mass of ice beneath. These molecules are in a semiliquid state, providing lubrication. The molecules in this "quasi-fluid" or "water-like" layer are less mobile than liquid water, but are much more mobile than the molecules deeper in the ice. At about −157 °C (−250 °F) the slippery layer is one molecule thick; as the temperature increases the slippery layer becomes thicker.
It had long been believed that ice is slippery because the pressure of an object in contact with it causes a thin layer to melt. The hypothesis was that the blade of an ice skate, exerting pressure on the ice, melts a thin layer, providing lubrication between the ice and the blade. This explanation, called "pressure melting", originated in the 19th century. (See Regelation.) Pressure melting could not account for skating on ice temperatures lower than −3.5 °C, whereas skaters often skate on lower-temperature ice.
In the 20th century, an alternative explanation, called "friction melting", was proposed by Lozowski, Szilder, Le Berre, Pomeau, and others: because of viscous frictional heating, a macroscopic layer of melted ice forms between the ice and the skate. With this they explained the low friction using nothing but macroscopic physics, whereby the frictional heat generated between skate and ice melts a layer of ice. This is a self-stabilizing mechanism: if by fluctuation the friction gets high, the layer grows in thickness and lowers the friction, and if it gets low, the layer decreases in thickness and increases the friction. The friction generated in the sheared layer of water between skate and ice grows as √V, with V the velocity of the skater, so that at low velocities the friction is also low.
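As a brief worked illustration of the scaling just stated (an illustration only, not a derivation): if the water-layer friction grows with velocity as F_water(V) ∝ √V, then F_water(2V) / F_water(V) = √2 ≈ 1.41, so doubling the speed raises this friction component by only about 41 percent, consistent with friction remaining low at low speeds and rising only slowly thereafter.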
Whatever the origin of the water layer, skating is more destructive than simply gliding. A skater leaves a visible trail behind on virgin ice, and skating rinks have to be regularly resurfaced to restore good skating conditions. This means that the deformation caused by the skate is plastic rather than elastic: the skate ploughs through the ice, in particular because of its sharp edges. Thus another component has to be added to the friction, the "ploughing friction". The calculated frictions are of the same order as the frictions measured in real skating in a rink. The ploughing friction decreases with the velocity V, since the pressure in the water layer increases with V and lifts the skate (aquaplaning). As a result, the sum of the water-layer friction and the ploughing friction increases only slightly with V, making skating at high speeds (over 90 km/h) possible.
A person's ability to ice skate depends on the roughness of the ice, the design of the ice skate, and the skill and experience of the skater. While serious injury is rare, a number of short track speed skaters have been paralysed after heavy falls in which they collided with the boarding, and a fall can be fatal if a helmet is not worn to protect against severe head injury. There is also a risk of injury from collisions, particularly during hockey games or in pair skating.
A significant danger when skating outdoors on a frozen body of water is falling through the ice into the freezing water underneath. Death can result from shock, hypothermia, or drowning. It is often difficult or impossible for the skater to climb out of the water, due to the weight of their ice skates and thick winter clothing, and the ice repeatedly breaking as they struggle to get back onto the surface. Also, if the skater becomes disoriented under the water, they might not be able to find the hole in the ice through which they have fallen. Although this can prove fatal, it is also possible for the rapid cooling to produce a condition in which a person can be revived up to hours after falling into the water. Experts have warned not to ice skate alone, and also warned parents not to leave children unattended on a frozen body of water.
A number of recreational and sporting activities take place on ice:
The following sports and games are also played on ice, but players are not required to wear ice skates. | [
{
"paragraph_id": 0,
"text": "Ice skating is the self-propulsion and gliding of a person across an ice surface, using metal-bladed ice skates. People skate for various reasons, including recreation (fun), exercise, competitive sports, and commuting. Ice skating may be performed on naturally frozen bodies of water, such as ponds, lakes, canals, and rivers, and on human-made ice surfaces both indoors and outdoors.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Natural ice surfaces used by skaters can accommodate a variety of winter sports which generally require an enclosed area, but are also used by skaters who need ice tracks and trails for distance skating and speed skating. Man-made ice surfaces include ice rinks, ice hockey rinks, bandy fields, ice tracks required for the sport of ice cross downhill, and arenas.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Various formal sports involving ice skating have emerged since the 19th century. Ice hockey, bandy, rinkball, and ringette, are team sports played with, respectively, a flat sliding puck, a ball, and a rubber ring. Synchronized skating is a unique artistic team sport derived from figure skating. Figure skating, ice cross downhill, speed skating, and barrel jumping (a discipline of speed skating), are among the sporting disciplines for individuals.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Research suggests that the earliest ice skating happened in southern Finland more than 4,000 years ago. This was done to save energy during winter journeys. True skating emerged when a steel blade with sharpened edges was used. Skates now cut into the ice instead of gliding on top of it. The Dutch added edges to ice skates in the 13th or 14th century. These ice skates were made of steel, with sharpened edges on the bottom to aid movement.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The fundamental construction of modern ice skates has stayed largely the same since then, although differing greatly in the details, particularly in the method of binding and the shape and construction of the steel blades. In the Netherlands, ice skating was considered proper for all classes of people, as shown in many pictures from Dutch Golden Age painters.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Ice skating was also practiced in China during the Song dynasty, and became popular among the ruling family of the Qing dynasty. Ancient ice skates made of animal bones, were found at the bronze age Gaotai Ruins in north west China, and are estimated to be likely 3,500 years old. Archeologists say these ancient skates are \"clear evidence for communication between China and Europe\" in the Bronze Age era, as they are very similar to bone skates unearthed in Europe.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In England \"the London boys\" had improvised butcher's bones as skates since the 12th century. Skating on metal skates seems to have arrived in England at the same time as the garden canal, with the English Restoration in 1660, after the king and court returned from an exile largely spent in the Netherlands. In London the ornamental \"canal\" in St James's Park was the main centre until the 19th century. Both Samuel Pepys and John Evelyn, the two leading diarists of the day, saw it on the \"new canal\" there on 1 December 1662, the first time Pepys had ever seen it (\"a very pretty art\"). Then it was \"performed before their Majesties and others, by diverse gentlemen and others, with scheets after the manner of the Hollanders\". Two weeks later, on 15 December 1662, Pepys accompanied the Duke of York, later King James II, on a skating outing: \"To the Duke, and followed him in the Park, when, though the ice was broken, he would go slide upon his skates, which I did not like; but he slides very well.\" In 1711 Jonathan Swift still thinks the sport might be unfamiliar to his \"Stella\", writing to her: \"Delicate walking weather; and the Canal and Rosamund's Pond full of the rabble and with skates, if you know what that is.\"",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The first organised skating club was the Edinburgh Skating Club, formed in the 1740s; some claim the club was established as early as 1642.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "An early contemporary reference to the club appeared in the second edition (1783) of the Encyclopædia Britannica:",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The metropolis of Scotland has produced more instances of elegant skaters than perhaps any country whatever: and the institution of a skating club about 40 years ago has contributed not a little to the improvement of this elegant amusement.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "From this description and others, it is apparent that the form of skating practiced by club members was indeed an early form of figure skating rather than speed skating. For admission to the club, candidates had to pass a skating test where they performed a complete circle on either foot (e.g., a figure eight), and then jumped over first one hat, then two and three, placed over each other on the ice.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "On the Continent, participation in ice skating was limited to members of the upper classes. Emperor Rudolf II of the Holy Roman Empire enjoyed ice skating so much, he had a large ice carnival constructed in his court in order to popularise the sport. King Louis XVI of France brought ice skating to Paris during his reign. Madame de Pompadour, Napoleon I, Napoleon III, and the House of Stuart were, among others, royal and upper-class fans of ice skating.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The next skating club to be established was in London and was not founded until 1830. Members wore a silver skate hanging from their buttonhole and met on The Serpentine, Hyde Park on 27th December, 1830. By the mid-19th century, ice skating was a popular pastime among the British upper and middle classes. Queen Victoria became acquainted with her future husband, Prince Albert, through a series of ice skating trips. Albert continued to skate after their marriage and on falling through the ice was once rescued by Victoria and a lady in waiting from a stretch of water in the grounds of Buckingham Palace.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Early attempts at the construction of artificial ice rinks were made during the \"rink mania\" of 1841–44. As the technology for the maintenance of natural ice did not exist, these early rinks used a substitute consisting of a mixture of hog's lard and various salts. An item in the 8 May 1844 issue of Littell's 'Living Age' headed the 'Glaciarium' reported that \"This establishment, which has been removed to Grafton Street East' Tottenham Court Road, was opened on Monday afternoon. The area of artificial ice is extremely convenient for such as may be desirous of engaging in the graceful and manly pastime of skating.\"",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Skating became popular as a recreation, a means of transport and spectator sport in The Fens in England for people from all walks of life. Racing was the preserve of workers, most of them agricultural labourers. It is not known when the first skating matches were held, but by the early nineteenth century racing was well established and the results of matches were reported in the press. Skating as a sport developed on the lakes of Scotland and the canals of the Netherlands. In the 13th and 14th centuries wood was substituted for bone in skate blades, and in 1572 the first iron skates were manufactured. When the waters froze, skating matches were held in towns and villages all over the Fens. In these local matches men (or sometimes women or children) would compete for prizes of money, clothing, or food.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The winners of local matches were invited to take part in the grand or championship matches, in which skaters from across the Fens would compete for cash prizes in front of crowds of thousands. The championship matches took the form of a Welsh main or \"last man standing\" contest (single-elimination tournament). The competitors, 16 or sometimes 32, were paired off in heats and the winner of each heat went through to the next round. A course of 660 yards was measured out on the ice, and a barrel with a flag on it placed at either end. For a one-and-a-half-mile race the skaters completed two rounds of the course, with three barrel turns.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "In the Fens, skates were called pattens, fen runners, or Whittlesey runners. The footstock was made of beechwood. A screw at the back was screwed into the heel of the boot, and three small spikes at the front kept the skate steady. There were holes in the footstock for leather straps to fasten it to the foot. The metal blades were slightly higher at the back than the front. In the 1890s, fen skaters started to race in Norwegian style skates.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "On Saturday 1 February 1879, a number of professional ice skaters from Cambridgeshire and Huntingdonshire met in the Guildhall, Cambridge, to set up the National Skating Association, the first national ice skating body in the world. The founding committee consisted of several landowners, a vicar, a fellow of Trinity College, a magistrate, two members of parliament, the mayor of Cambridge, the Lord Lieutenant of Cambridge, journalist James Drake Digby, the president of Cambridge University Skating Club, and Neville Goodman, a graduate of Peterhouse, Cambridge (and son of Potto Brown's milling partner, Joseph Goodman). The newly formed Association held their first one-and-a-half-mile British professional championship at Thorney in December 1879.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The first instructional book concerning ice skating was published in London in 1772. The book titled The Art of Figure Skating, written by a British artillery lieutenant, Robert Jones, describes basic figure skating forms such as circles and figure eights. The book was written solely for men, as women did not normally ice skate in the late 18th century. It was with the publication of this manual that ice skating split into its two main disciplines, speed skating and figure skating.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The founder of modern figure skating as it is known today was Jackson Haines, an American. He was the first skater to incorporate ballet and dance movements into his skating, as opposed to focusing on tracing patterns on the ice. Haines also invented the sit spin and developed a shorter, curved blade for figure skating that allowed for easier turns. He was also the first to wear blades that were permanently attached to the boot.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "The International Skating Union was founded in 1892 as the first international ice skating organisation in Scheveningen, in the Netherlands. The Union created the first codified set of figure skating rules and governed international competition in speed and figure skating. The first Championship, known as the Championship of the Internationale Eislauf-Vereingung, was held in Saint Petersburg in 1896. The event had four competitors and was won by Gilbert Fuchs.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "A skate can glide over ice because there is a layer of ice molecules on the surface that are not as tightly bound as the molecules of the mass of ice beneath. These molecules are in a semiliquid state, providing lubrication. The molecules in this \"quasi-fluid\" or \"water-like\" layer are less mobile than liquid water, but are much more mobile than the molecules deeper in the ice. At about −157 °C (−250 °F) the slippery layer is one molecule thick; as the temperature increases the slippery layer becomes thicker.",
"title": " Physical mechanics of skating"
},
{
"paragraph_id": 22,
"text": "It had long been believed that ice is slippery because the pressure of an object in contact with it causes a thin layer to melt. The hypothesis was that the blade of an ice skate, exerting pressure on the ice, melts a thin layer, providing lubrication between the ice and the blade. This explanation, called \"pressure melting\", originated in the 19th century. (See Regelation.) Pressure melting could not account for skating on ice temperatures lower than −3.5 °C, whereas skaters often skate on lower-temperature ice.",
"title": " Physical mechanics of skating"
},
{
"paragraph_id": 23,
"text": "In the 20th century, an alternative explanation, called \"friction melting\", proposed by Lozowski, Szilder, Le Berre, Pomeau, and others, showed that viscous frictional heating produces a macroscopic layer of meltwater between the ice and the skate. This explains the low friction using nothing but macroscopic physics: the frictional heat generated between skate and ice melts a thin layer of ice. The mechanism is self-stabilizing: if a fluctuation makes the friction high, the layer grows in thickness and lowers the friction, and if the friction gets low, the layer thins and the friction increases. The friction generated in the sheared layer of water between skate and ice grows as √V, with V the velocity of the skater, so that for low velocities the friction is also low.",
"title": " Physical mechanics of skating"
},
{
"paragraph_id": 24,
"text": "Whatever the origin of the water layer, skating is more destructive than simply gliding. A skater leaves a visible trail behind on virgin ice and skating rinks have to be regularly resurfaced to improve the skating conditions. It means that the deformation caused by the skate is plastic rather than elastic. The skate ploughs through the ice in particular due to the sharp edges. Thus another component has to be added to the friction: the \"ploughing friction\". The calculated frictions are of the same order as the measured frictions in real skating in a rink. The ploughing friction decreases with the velocity V, since the pressure in the water layer increases with V and lifts the skate (aquaplaning). As a result the sum of the water-layer friction and the ploughing friction only increases slightly with V, making skating at high speeds (>90 km/h) possible.",
"title": " Physical mechanics of skating"
},
{
"paragraph_id": 25,
"text": "A person's ability to ice skate depends on the roughness of the ice, the design of the ice skate, and the skill and experience of the skater. While serious injury is rare, a number of short track speed skaters have been paralysed after a heavy fall when they collided with the boarding. A fall can be fatal if a helmet is not worn to protect against severe head injury. Accidents are rare but there is a risk of injury from collisions, particularly during hockey games or in pair skating.",
"title": "Inherent safety risks"
},
{
"paragraph_id": 26,
"text": "A significant danger when skating outdoors on a frozen body of water is falling through the ice into the freezing water underneath. Death can result from shock, hypothermia, or drowning. It is often difficult or impossible for the skater to climb out of the water, due to the weight of their ice skates and thick winter clothing, and the ice repeatedly breaking as they struggle to get back onto the surface. Also, if the skater becomes disoriented under the water, they might not be able to find the hole in the ice through which they have fallen. Although this can prove fatal, it is also possible for the rapid cooling to produce a condition in which a person can be revived up to hours after falling into the water. Experts have warned not to ice skate alone, and also warned parents not to leave children unattended on a frozen body of water.",
"title": "Inherent safety risks"
},
{
"paragraph_id": 27,
"text": "A number of recreational and sporting activities take place on ice:",
"title": "Communal activities on ice"
},
{
"paragraph_id": 28,
"text": "The following sports and games are also played on ice, but players are not required to wear ice skates.",
"title": "Communal activities on ice"
}
]
| Ice skating is the self-propulsion and gliding of a person across an ice surface, using metal-bladed ice skates. People skate for various reasons, including recreation (fun), exercise, competitive sports, and commuting. Ice skating may be performed on naturally frozen bodies of water, such as ponds, lakes, canals, and rivers, and on human-made ice surfaces both indoors and outdoors. Natural ice surfaces used by skaters can accommodate a variety of winter sports which generally require an enclosed area, but are also used by skaters who need ice tracks and trails for distance skating and speed skating. Man-made ice surfaces include ice rinks, ice hockey rinks, bandy fields, ice tracks required for the sport of ice cross downhill, and arenas. Various formal sports involving ice skating have emerged since the 19th century. Ice hockey, bandy, rinkball, and ringette, are team sports played with, respectively, a flat sliding puck, a ball, and a rubber ring. Synchronized skating is a unique artistic team sport derived from figure skating. Figure skating, ice cross downhill, speed skating, and barrel jumping, are among the sporting disciplines for individuals. | 2001-10-10T02:48:29Z | 2023-09-15T14:25:54Z | [
"Template:Convert",
"Template:Cite book",
"Template:Use dmy dates",
"Template:Clear",
"Template:Anchor",
"Template:Cite news",
"Template:Commons category",
"Template:Cite EB1911",
"Template:Webarchive",
"Template:Cite journal",
"Template:Cite encyclopedia",
"Template:Wikivoyage",
"Template:Curlie",
"Template:Short description",
"Template:More citations needed",
"Template:Blockquote",
"Template:Winter Olympic sports",
"Template:Authority control",
"Template:Wiktionary",
"Template:Main",
"Template:Reflist",
"Template:Cite web"
]
| https://en.wikipedia.org/wiki/Ice_skating |
15,147 | International Olympic Committee | The International Olympic Committee (IOC; French: Comité international olympique, CIO) is a non-governmental sports organisation based in Lausanne, Switzerland.
Founded in 1894 by Pierre de Coubertin and Demetrios Vikelas, it is the authority responsible for organising the modern (Summer, Winter, and Youth) Olympic Games.
The IOC is the governing body of the National Olympic Committees (NOCs) and of the worldwide Olympic Movement, the IOC's term for all entities and individuals involved in the Olympic Games. As of 2020, 206 NOCs were officially recognised by the IOC. Its president is Thomas Bach.
Its stated mission is to promote Olympism throughout the world and to lead the Olympic Movement:
All IOC members must swear to the following:
"Honoured to be chosen as a member of the International Olympic Committee, I fully accept all the responsibilities that this office brings: I promise to serve the Olympic Movement to the best of my ability. I will respect the Olympic Charter and accept the decisions of the IOC. I will always act independently of commercial and political interests as well as of any racial or religious consideration. I will fully comply with the IOC Code of Ethics. I promise to fight against all forms of discrimination and dedicate myself in all circumstances to promote the interests of the International Olympic Committee and Olympic Movement."
The IOC was created by Pierre de Coubertin, on 23 June 1894 with Demetrios Vikelas as its first president. As of February 2022, its membership consists of 105 active members and 45 honorary members. The IOC is the supreme authority of the worldwide modern Olympic Movement.
The IOC organizes the modern Olympic Games and Youth Olympic Games (YOG), held in summer and winter every four years. The first Summer Olympics was held in Athens, Greece, in 1896; the first Winter Olympics was in Chamonix, France, in 1924. The first Summer YOG was in Singapore in 2010, and the first Winter YOG was in Innsbruck in 2012.
Until 1992, both Summer and Winter Olympics were held in the same year. After that year, however, the IOC shifted the Winter Olympics to the even years between Summer Games to help space the planning of the two events from one another, and to improve the financial balance of the IOC, which receives a proportionally greater income in Olympic years.
Since 1995, the IOC has worked to address environmental health concerns resulting from hosting the games. In 1995, IOC President Juan Antonio Samaranch stated, "the International Olympic Committee is resolved to ensure that the environment becomes the third dimension of the organization of the Olympic Games, the first and second being sport and culture." Acting on this statement, in 1996 the IOC added the "environment" as a third pillar to its vision for the Olympic Games.
In 2000, the "Green Olympics" effort was developed by the Beijing Organizing Committee for the Beijing Olympic Games. The Beijing 2008 Summer Olympics executed over 160 projects addressing the goals of improved air quality and water quality, sustainable energy, improved waste management, and environmental education. These projects included industrial plant relocation or closure, furnace replacement, introduction of new emission standards, and more strict traffic control.
In 2009, the UN General Assembly granted the IOC Permanent Observer status. The decision enables the IOC to be directly involved in the UN Agenda and to attend UN General Assembly meetings where it can take the floor. In 1993, the General Assembly approved a Resolution to further solidify IOC–UN cooperation by reviving the Olympic Truce.
The IOC received approval in November 2015 to construct a new headquarters in Vidy, Lausanne. The cost of the project was estimated to stand at $156m. The IOC announced on the 11th of February 2019 that the "Olympic House" would be inaugurated on the 23rd of June 2019 to coincide with its 125th anniversary. The Olympic Museum remains in Ouchy, Lausanne.
Since 2002, the IOC has been involved in several high-profile controversies including taking gifts, its DMCA take down request of the 2008 Tibetan protest videos, Russian doping scandals, and its support of the Beijing 2022 Winter Olympics despite China's human rights violations documented in the Xinjiang Papers.
Detailed frameworks for environmental sustainability were prepared for the 2018 Winter Olympics, and 2020 Summer Olympics in PyeongChang, South Korea, and Tokyo.
It is an association under the Swiss Civil Code (articles 60–79).
The IOC Session is the general meeting of the members of the IOC, held once a year in which each member has one vote. It is the IOC's supreme organ and its decisions are final.
Extraordinary Sessions may be convened by the President or upon the written request of at least one third of the members.
Among others, the powers of the Session are:
For most of its existence the IOC was controlled by members who were selected by other members. Countries that had hosted the Games were allowed two members. When named they became IOC members in their respective countries rather than representatives of their respective countries to the IOC.
Membership ends under the following circumstances:
IOC recognises 82 international sports federations (IFs):
IOC awards gold, silver, and bronze medals for the top three competitors in each sporting event.
Other honours.
During the first half of the 20th century the IOC ran on a small budget. As IOC president from 1952 to 1972, Avery Brundage rejected all attempts to link the Olympics with commercial interests. Brundage believed that corporate interests would unduly impact the IOC's decision-making. Brundage's resistance to this revenue stream left IOC organising committees to negotiate their own sponsorship contracts and use the Olympic symbols.
When Brundage retired the IOC had US$2 million in assets; eight years later coffers had swollen, to US$45 million. This was primarily due to a shift in ideology toward expansion of the Games through corporate sponsorship and the sale of television rights. When Juan Antonio Samaranch was elected IOC president in 1980 his desire was to make the IOC financially independent. Samaranch appointed Canadian IOC member Richard Pound to lead the initiative as Chairman of the "New Sources of Finance Commission".
In 1982 the IOC drafted International Sport and Leisure, a Swiss sports marketing company, to develop a global marketing programme for the Olympic Movement. ISL developed the programme, but was replaced by Meridian Management, a company partly owned by the IOC in the early 1990s. In 1989, a staff member at ISL Marketing, Michael Payne, moved to the IOC and became the organisation's first marketing director. ISL and then Meridian continued in the established role as the IOC's sales and marketing agents until 2002. In collaboration with ISL Marketing and Meridian Management, Payne made major contributions to the creation of a multibillion-dollar sponsorship marketing programme for the organisation which, along with improvements in TV marketing and improved financial management, helped to restore the IOC's financial viability.
The Olympic Movement generates revenue through five major programmes.
The OCOGs have responsibility for domestic sponsorship, ticketing and licensing programmes, under the direction of the IOC. The Olympic Movement generated a total of more than US$4 billion (€2.5 billion) in revenue during the Olympic quadrennium from 2001 to 2004.
The IOC distributes some of its revenue to organisations throughout the Olympic Movement to support the staging of the Olympic Games and to promote worldwide sport development. The IOC retains approximately 10% of the Olympic marketing revenue for operational and administrative costs. For the 2013–2016 period, IOC had revenues of about US$5.0 billion, of which 73% were from broadcasting rights and 18% were from Olympic Partners. The Rio 2016 organising committee received US$1.5 billion and the Sochi 2014 organising committee received US$833 million. National Olympic committees and international federations received US$739 million each.
In July 2000, when the Los Angeles Times reported on how the IOC redistributes profits from sponsorships and broadcasting rights, historian Bob Barney stated that he had "yet to see matters of corruption in the IOC", but noted there were "matters of unaccountability". He later noted that when the spotlight is on the athletes, it has "the power to eclipse impressions of scandal or corruption", with respect to the Olympic bid process.
The IOC provides TOP programme contributions and broadcast revenue to the OCOGs to support the staging of the Olympic Games:
NOCs receive financial support for training and developing their Olympic teams, Olympic athletes, and Olympic hopefuls. The IOC distributes TOP programme revenue to each NOC. The IOC also contributes Olympic broadcast revenue to Olympic Solidarity, an IOC organisation that provides financial support to NOCs with the greatest need. The continued success of the TOP programme and Olympic broadcast agreements has enabled the IOC to provide increased support for the NOCs with each Olympic quadrennium. The IOC provided approximately US$318.5 million to NOCs for the 2001–2004 quadrennium.
The IOC is the largest single revenue source for the majority of IOSFs, with contributions that assist them in developing their respective sports. The IOC provides financial support to the 28 IOSFs of Olympic summer sports and the seven IOSFs of Olympic winter sports. The continually increasing value of Olympic broadcasts has enabled the IOC to substantially increase financial support to IOSFs with each successive Games. The seven winter sports IFs shared US$85.8 million (€75 million) in Salt Lake 2002 broadcast revenue.
The IOC contributes Olympic marketing revenue to the programmes of various recognised international sports organisations, including the International Paralympic Committee (IPC), and the World Anti-Doping Agency (WADA).
The IOC requires cities bidding to host the Olympics to provide a comprehensive strategy to protect the environment in preparation for hosting, and following the conclusion of the Games.
The IOC has four major approaches to addressing environmental health concerns.
Host cities have concerns about traffic congestion and air pollution, both of which can compromise air quality during and after venue construction. Various air quality improvement measures are undertaken before and after each event. Traffic control is the primary method to reduce concentrations of air pollutants, including barring heavy vehicles.
Research at the Beijing Olympic Games identified particulate matter – measured in terms of PM10 (the concentration of particles with an aerodynamic diameter of 10 μm or less in a given volume of air) – as a top priority. Particulate matter, along with other airborne pollutants, causes serious health problems, such as asthma, and damages urban ecosystems. Black carbon is released into the air from incomplete combustion of carbonaceous fuels, contributing to climate change and harming human health. Secondary pollutants such as CO, NOx, SO2, benzene, toluene, ethylbenzene, and xylenes (BTEX) are also released during construction.
For the Beijing Olympics, vehicles not meeting the Euro 1 emission standards were banned, and the odd-even rule was implemented in the Beijing administrative area. Air quality improvement measures implemented by the Beijing government included replacing coal with natural gas, suspending construction and/or imposing strict dust control on construction sites, closing or relocating polluting industrial plants, building long subway lines, using cleaner fuels in power plants, and reducing activity at some of the polluting factories. As a result, levels of primary and secondary pollutants were reduced, and good air quality was recorded during the Beijing Olympics on most days. Beijing also sprayed silver iodide into the atmosphere to induce rain to remove existing pollutants from the air.
Soil contamination can occur during construction. The Sydney Olympic Games of 2000 resulted in improving a highly contaminated area known as Homebush Bay. A pre-Games study reported soil metal concentrations high enough to potentially contaminate groundwater. A remediation strategy was developed. Contaminated soil was consolidated into four containment areas within the site, which left the remaining areas available for recreational use. The site contained waste materials that then no longer posed a threat to surrounding aquifers. In the 2006 Games in Torino, Italy, soil impacts were observed. Before the Games, researchers studied four areas that the Games would likely affect: a floodplain, a highway, the motorway connecting the city to Lyon, France, and a landfill. They analysed the chemicals in these areas before and after the Games. Their findings revealed an increase in the number of metals in the topsoil post-Games, and indicated that soil was capable of buffering the effects of many but not all heavy metals. Mercury, lead, and arsenic may have been transferred into the food chain.
One promise made to Londoners for the 2012 Olympic Games was that the Olympic Park would be a "blueprint for sustainable living." However, garden allotments were temporarily relocated due to the building of the Olympic stadium. The allotments were eventually returned, but the soil quality had been damaged. Further, allotment residents were exposed to radioactive waste for five months prior to moving, during the excavation of the site for the Games. Other local residents, construction workers, and onsite archaeologists faced similar exposures and risks.
The Olympic Games can affect water quality in several ways, including runoff and the transfer of polluting substances from the air to water sources through rainfall. Harmful particulates come from natural substances (such as plant matter crushed by higher volumes of pedestrian and vehicle traffic) and man-made substances (such as exhaust from vehicles or industry). Contaminants from these two categories elevate amounts of toxins in street dust. Street dust reaches water sources through runoff, facilitating the transfer of toxins to environments and communities that rely on these water sources.
In 2013, researchers in Beijing found a significant relationship between PM2.5 concentrations in the air and in rainfall. Studies showed that rainfall had transferred a large portion of these pollutants from the air to water sources. Notably, this cleared the air of such particulates, substantially improving air quality at the venues.
De Coubertin was influenced by the aristocratic ethos exemplified by English public schools. The public schools subscribed to the belief that sport formed an important part of education but that practicing or training was considered cheating. As class structure evolved through the 20th century, the definition of the amateur athlete as an aristocratic gentleman became outdated. The advent of the state-sponsored "full-time amateur athlete" of Eastern Bloc countries further eroded the notion of the pure amateur, as it put Western, self-financed amateurs at a disadvantage. The Soviet Union entered teams of athletes who were all nominally students, soldiers, or working in a profession, but many of whom were paid by the state to train on a full-time basis. Nevertheless, the IOC held to the traditional rules regarding amateurism.
Near the end of the 1960s, the Canadian Amateur Hockey Association (CAHA) felt their amateur players could no longer be competitive against the Soviet full-time athletes and other constantly improving European teams. They pushed for the ability to use players from professional leagues, but met opposition from the IIHF and IOC. At the IIHF Congress in 1969, the IIHF decided to allow Canada to use nine non-NHL professional hockey players at the 1970 World Championships in Montreal and Winnipeg, Manitoba, Canada. The decision was reversed in January 1970 after Brundage declared that the change would put ice hockey's status as an Olympic sport in jeopardy. In response, Canada withdrew from international ice hockey competition and officials stated that they would not return until "open competition" was instituted.
Beginning in the 1970s, amateurism was gradually phased out of the Olympic Charter. After the 1988 Games, the IOC decided to make all professional athletes eligible for the Olympics, subject to the approval of the IFOSs.
The 1976 Winter Olympics were originally awarded to Denver on 12 May 1970, but a rise in costs led to Colorado voters' rejection on 7 November 1972, by a 3 to 2 margin, of a $5 million bond issue to finance the Games with public funds.
Denver officially withdrew on 15 November, and the IOC then offered the Games to Whistler, British Columbia, Canada, but they too declined, owing to a change of government following elections.
Salt Lake City, Utah, a 1972 Winter Olympics final candidate who eventually hosted the 2002 Winter Olympics, offered itself as a potential host after Denver's withdrawal. The IOC declined Salt Lake City's offer and, on 5 February 1973, selected Innsbruck, the city that had hosted the Games twelve years earlier.
A scandal broke on 10 December 1998, when Swiss IOC member Marc Hodler, head of the coordination committee overseeing the organisation of the 2002 Games, announced that several members of the IOC had received gifts from members of the Salt Lake City 2002 bid Committee in exchange for votes. Soon four independent investigations were underway: by the IOC, the United States Olympic Committee (USOC), the SLOC, and the United States Department of Justice. Before any of the investigations could get under way, SLOC co-heads Tom Welch and David Johnson both resigned their posts. Many others soon followed. The Department of Justice filed fifteen counts of bribery and fraud against the pair.
As a result of the investigation, ten IOC members were expelled and another ten were sanctioned. Stricter rules were adopted for future bids, and caps were put into place as to how much IOC members could accept from bid cities. Additionally, new term and age limits were put into place for IOC membership, an Athlete's Commission was created and fifteen former Olympic athletes gained provisional membership status.
In 2000, international human rights groups attempted to pressure the IOC to reject Beijing's bid in protest of the human rights situation in the People's Republic of China. One Chinese dissident was sentenced to two years in prison during an IOC tour. After the city won the 2008 Summer Olympic Games, Amnesty International and others expressed concerns regarding the human rights situation. The second of the Fundamental Principles of Olympism in the Olympic Charter states that "The goal of Olympism is to place sport at the service of the harmonious development of man, with a view to promoting a peaceful society concerned with the preservation of human dignity." Amnesty International considered PRC policies and practices to be in violation of that principle.
Some days before the opening ceremony in August 2008, the IOC issued DMCA takedown notices against Tibetan protest videos on YouTube. YouTube and the Electronic Frontier Foundation (EFF) pushed back against the IOC, which then withdrew its complaint.
On 1 March 2016, Owen Gibson of The Guardian reported that French financial prosecutors investigating corruption in world athletics had expanded their remit to include the bidding and voting processes for the 2016 Summer Olympics and 2020 Summer Olympics. The story followed an earlier report in January by Gibson, who revealed that Papa Massata Diack, the son of then-IAAF president Lamine Diack, appeared to arrange for "parcels" to be delivered to six IOC members in 2008 when Qatar was bidding for the 2016 Summer Olympic Games, though it failed to make it beyond the shortlist. Weeks later, Qatari authorities denied the allegations. Gibson then reported that a €1.3m (£1m, $1.5m) payment from the Tokyo Olympic Committee team to an account linked to Papa Diack was made during Japan's successful race to host the 2020 Summer Games. The following day, French prosecutors confirmed they were investigating allegations of "corruption and money laundering" of more than $2m in suspicious payments made by the Tokyo 2020 Olympic bid committee to a secret bank account linked to Diack. Tsunekazu Takeda of the Tokyo 2020 bid committee responded on 17 May 2016, denying allegations of wrongdoing, and refused to reveal transfer details. The controversy was reignited on 11 January 2019 after it emerged Takeda had been indicted on corruption charges in France over his role in the bid process.
In 2014, at the final stage of the bid process for 2022, Oslo, seen as the favourite, surprised observers by withdrawing. Following a string of local controversies over the masterplan, local officials were outraged by IOC demands on athletes and the Olympic family. There were also allegations of lavish treatment of stakeholders, including separate lanes to "be created on all roads where IOC members will travel, which are not to be used by regular people or public transportation", as well as exclusive cars and drivers for IOC members. The differential treatment irritated Norwegians. The IOC also demanded "control over all advertising space throughout Oslo and the subsites during the Games, to be used exclusively by official sponsors."
Human rights groups and governments criticised the committee for allowing Beijing to bid for the 2022 Winter Olympics. Some weeks before the opening ceremony, the Xinjiang Papers were released, documenting abuses by the Chinese government against the Uyghur population in Xinjiang that many governments described as genocide.
Many government officials, notably those in the United States and Great Britain, called for a boycott of the 2022 winter games. The IOC responded to concerns by saying that the Olympic Games must not be politicized. Some nations, including the United States, staged a diplomatic boycott of the games, which barred a diplomatic delegation from representing the nation at the games, rather than a full boycott that would have barred athletes from competing. In September 2021, the IOC suspended the Olympic Committee of the Democratic People's Republic of Korea after it boycotted the 2020 Summer Olympics, citing "COVID-19 concerns".
On 14 October 2021, vice-president of the IOC, John Coates, announced that the IOC had no plans to challenge the Chinese government on humanitarian issues, stating that the issues were "not within the IOC's remit".
In December 2021, the United States House of Representatives voted unanimously for a resolution stating that the IOC had violated its own human rights commitments by cooperating with the Chinese government. In January 2022, members of the U.S. House of Representatives unsuccessfully attempted to pass legislation to strip the IOC of its tax exemption status in the United States.
The IOC uses sex verification to ensure participants compete only in events matching their sex. Verifying the sex of Olympic participants dates back to ancient Greece, when Kallipateira attempted to break Greek law by dressing as a man to enter the arena as a trainer. After she was discovered, a policy was established wherein trainers, just like athletes, had to appear naked in order to better assure that all were male. In more recent history, sex verification has taken many forms and been subject to dispute. Before sex testing, Olympic officials relied on "nude parades" and doctor's notes. Successful women athletes perceived to be masculine were most likely to be inspected. In 1966, the IOC implemented a compulsory sex verification process that took effect at the 1968 Winter Olympics, where a lottery system was used to determine who would be inspected with a Barr body test. The scientific community found fault with this policy: the use of the Barr body test was evaluated by fifteen geneticists who unanimously agreed it was scientifically invalid. The method was later replaced with PCR testing, along with evaluation of factors such as brain anatomy and behaviour. Following continued backlash against mandatory sex testing, opposition from the IOC's Athletes' Commission brought the practice to an end in 1999. Although sex testing was no longer mandated, women who did not present as feminine continued to be inspected based on suspicion. This practice started at the 2000 Summer Olympics and remained in use until the 2010 Winter Olympics. In 2011 the IOC created a Hyperandrogenism Regulation, which aimed to standardize natural testosterone levels in women athletes. This transition in sex testing was intended to assure fairness within female events, on the belief that higher testosterone levels increased athletic ability and gave unfair advantages to intersex and transgender competitors. Any female athlete flagged for suspicion whose testosterone surpassed regulation levels was prohibited from competing until medical treatment brought her hormone levels within standard limits. It has been argued by press, scholars, and politicians that some ethnicities are disproportionately impacted by this regulation and that the rule excludes too many athletes. The most notable cases of bans based on testing results are those of Maria José Martínez-Patiño (1985), Santhi Soundarajan (2006), Caster Semenya (2009), Annet Negesa (2012), and Dutee Chand (2014).
Before the 2014 Asian Games, Indian athlete Dutee Chand was banned from competing internationally having been found to be in violation of the Hyperandrogenism Regulation. Following the denial of her appeal by the Court of Arbitration for Sport, the IOC suspended the policy for the 2016 Summer Olympics and 2018 Winter Olympics.
Eight years after the 1998 Winter Olympics, a report ordered by the Nagano region's governor said the Japanese city had provided millions of dollars in an "illegitimate and excessive level of hospitality" to IOC members, including US$4.4 million spent on entertainment. Earlier reports had put the figure at approximately US$14 million. The precise figures are unknown: after the IOC asked that the entertainment expenditures not be made public, Nagano destroyed its financial records.
In 2010, the IOC was nominated for the Public Eye Awards. This award seeks to present "shame-on-you-awards to the nastiest corporate players of the year".
Before the start of the 2012 Summer Olympic Games, the IOC decided not to hold a minute of silence to honour the 11 Israeli Olympians who were killed 40 years prior in the Munich massacre. Jacques Rogge, the then-IOC President, said it would be "inappropriate" to do so. Speaking of the decision, Israeli Olympian Shaul Ladany, who had survived the Munich Massacre, commented: "I do not understand. I do not understand, and I do not accept it".
In February 2013, the IOC excluded wrestling from its core Olympic sports for the Summer Olympic programme of the 2020 Summer Olympics, because the sport did not offer equal opportunities for men and women. The decision was attacked by the sporting community, given the sport's long traditions, and was later overturned after a reassessment; the sport was placed back among the core Olympic sports, a status it will hold until at least 2032.
Media attention began growing in December 2014 when German broadcaster ARD reported on state-sponsored doping in Russia, comparing it to doping in East Germany. In November 2015, the World Anti-Doping Agency (WADA) published a report and the World Athletics (then known as the IAAF) suspended Russia indefinitely from world track and field events. The United Kingdom Anti-Doping agency later assisted WADA with testing in Russia. In June 2016, they reported that they were unable to fully carry out their work and noted intimidation by armed Federal Security Service (FSB) agents. After a Russian former lab director made allegations about the 2014 Winter Olympics in Sochi, WADA commissioned an independent investigation led by Richard McLaren. McLaren's investigation found corroborating evidence, concluding in a report published in July 2016 that the Ministry of Sport and the FSB had operated a "state-directed failsafe system" using a "disappearing positive [test] methodology" (DPM) from "at least late 2011 to August 2015".
In response to these findings, WADA announced that RUSADA should be regarded as non-compliant with respect to the World Anti-Doping Code and recommended that Russia be banned from competing at the 2016 Summer Olympics. The IOC rejected the recommendation, stating that a separate decision would be made for each athlete by the relevant IF and the IOC, based on the athlete's individual circumstances. One day prior to the opening ceremony, 270 athletes were cleared to compete under the Russian flag, while 167 were removed because of doping. In contrast, the entire Kuwaiti team was banned from competing under their own flag (for a non-doping related matter).
In contrast to the IOC, the IPC voted unanimously to ban the entire Russian team from the 2016 Summer Paralympics, having found evidence that the DPM was also in operation at the 2014 Winter Paralympics.
On 5 December 2017, the IOC announced that the Russian Olympic Committee had been suspended effective immediately from the 2018 Winter Olympics. Athletes who had no previous drug violations and a consistent history of drug testing were allowed to compete under the Olympic Flag as an "Olympic Athlete from Russia" (OAR). Under the terms of the decree, Russian government officials were barred from the Games, and neither the country's flag nor anthem would be present. The Olympic Flag and Olympic Anthem would be used instead, and on 20 December 2017 the IOC proposed an alternate uniform logo.
On 1 February 2018, the Court of Arbitration for Sport (CAS) found that the IOC provided insufficient evidence for 28 athletes, and overturned their IOC sanctions. For 11 other athletes, the CAS decided that there was sufficient evidence to uphold their Sochi sanctions, but reduced their lifetime bans to only the 2018 Winter Olympics. The IOC said in a statement that "the result of the CAS decision does not mean that athletes from the group of 28 will be invited to the Games. Not being sanctioned does not automatically confer the privilege of an invitation" and that "this [case] may have a serious impact on the future fight against doping". The IOC found it important to note that the CAS Secretary General "insisted that the CAS decision does not mean that these 28 athletes are innocent" and that they would consider an appeal against the court's decision. Later that month, the Russian Olympic Committee was reinstated by the IOC, despite numerous failed drug tests by Russian athletes in the 2018 Olympics. The Russian Anti-Doping Agency was re-certified in September, despite the Russian rejection of the McLaren Report.
On 24 November 2018, the Taiwanese government held a referendum over a change in the naming of their National Olympic Committee, from "Chinese Taipei", a name agreed to in 1981 by the People's Republic of China in the Nagoya Protocol, which denies the Republic of China's legitimacy, to simply "Taiwan", after the main island in the Free Area. In the days immediately prior to the referendum, the IOC and the PRC government issued a threatening statement, suggesting that if the team underwent the name change, the IOC had the legal right to make a "suspension of or forced withdrawal" of the team from the 2020 Summer Olympics. In response to the allegations of election interference, the IOC stated, "The IOC does not interfere with local procedures and fully respects freedom of expression. However, to avoid any unnecessary expectations or speculations, the IOC wishes to reiterate that this matter is under its jurisdiction." Subsequently, amid significant PRC pressure, the referendum failed in Taiwan, with 45% in favour and 54% against.
In November 2021, the IOC was again criticized by Human Rights Watch (HRW) and others for its response to the 2021 disappearance of Peng Shuai, following her publishing of sexual assault allegations against a former Chinese vice premier, and high-ranking member of the Chinese Communist Party, Zhang Gaoli. The IOC's response was internationally criticized as complicit in assisting the Chinese government to silence Peng's sexual assault allegations. Zhang Gaoli previously led the Beijing bidding committee to host the 2022 Winter Olympics.
In July 2020 (reconfirmed by public notice in September 2020 and January 2021), the FIE replaced its previous handshake requirement with a "salute" by the opposing fencers, writing in its public notice that handshakes were "suspended until further notice." Nevertheless, in July 2023, when Ukrainian four-time world fencing individual sabre champion Olga Kharlan was disqualified at the World Fencing Championships by the Fédération Internationale d'Escrime for not shaking the hand of her defeated Russian opponent, although Kharlan had instead offered a tapping of blades in acknowledgement, Thomas Bach stepped in the next day. As President of the IOC, he sent a letter to Kharlan in which he expressed empathy for her and wrote that, in light of the situation, she was guaranteed a spot in the 2024 Summer Olympics. He wrote further: "as a fellow fencer, it is impossible for me to imagine how you feel at this moment. The war against your country, the suffering of the people in Ukraine, the uncertainty around your participation at the Fencing World Championships ... and then the events which unfolded yesterday – all this is a roller coaster of emotions and feelings. It is admirable how you are managing this incredibly difficult situation, and I would like to express my full support to you. Rest assured that the IOC will continue to stand in full solidarity with the Ukrainian athletes and the Olympic community of Ukraine."
The Olympic Partner (TOP) sponsorship programme includes the following commercial sponsors of the Olympic Games.
46°31′5″N 6°35′49″E / 46.51806°N 6.59694°E / 46.51806; 6.59694 | [
{
"paragraph_id": 0,
"text": "The International Olympic Committee (IOC; French: Comité international olympique, CIO) is a non-governmental sports organisation based in Lausanne, Switzerland.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Founded in 1894 by Pierre de Coubertin and Demetrios Vikelas, it is the authority responsible for organising the modern (Summer, Winter, and Youth) Olympic Games.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The IOC is the governing body of the National Olympic Committees (NOCs) and of the worldwide Olympic Movement, the IOC's term for all entities and individuals involved in the Olympic Games. As of 2020, 206 NOCs were officially recognised by the IOC. Its president is Thomas Bach.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Its stated mission is to promote Olympism throughout the world and to lead the Olympic Movement:",
"title": "Mission"
},
{
"paragraph_id": 4,
"text": "All IOC members must swear to the following:",
"title": "IOC member oath"
},
{
"paragraph_id": 5,
"text": "\"Honoured to be chosen as a member of the International Olympic Committee, I fully accept all the responsibilities that this office brings: I promise to serve the Olympic Movement to the best of my ability. I will respect the Olympic Charter and accept the decisions of the IOC. I will always act independently of commercial and political interests as well as of any racial or religious consideration. I will fully comply with the IOC Code of Ethics. I promise to fight against all forms of discrimination and dedicate myself in all circumstances to promote the interests of the International Olympic Committee and Olympic Movement.\"",
"title": "IOC member oath"
},
{
"paragraph_id": 6,
"text": "The IOC was created by Pierre de Coubertin, on 23 June 1894 with Demetrios Vikelas as its first president. As of February 2022, its membership consists of 105 active members and 45 honorary members. The IOC is the supreme authority of the worldwide modern Olympic Movement.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The IOC organizes the modern Olympic Games and Youth Olympic Games (YOG), held in summer and winter every four years. The first Summer Olympics was held in Athens, Greece, in 1896; the first Winter Olympics was in Chamonix, France, in 1924. The first Summer YOG was in Singapore in 2010, and the first Winter YOG was in Innsbruck in 2012.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Until 1992, both Summer and Winter Olympics were held in the same year. After that year, however, the IOC shifted the Winter Olympics to the even years between Summer Games to help space the planning of the two events from one another, and to improve the financial balance of the IOC, which receives a proportionally greater income in Olympic years.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Since 1995, the IOC has worked to address environmental health concerns resulting from hosting the games. In 1995, IOC President Juan Antonio Samaranch stated, \"the International Olympic Committee is resolved to ensure that the environment becomes the third dimension of the organization of the Olympic Games, the first and second being sport and culture.\" Acting on this statement, in 1996 the IOC added the \"environment\" as a third pillar to its vision for the Olympic Games.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In 2000, the \"Green Olympics\" effort was developed by the Beijing Organizing Committee for the Beijing Olympic Games. The Beijing 2008 Summer Olympics executed over 160 projects addressing the goals of improved air quality and water quality, sustainable energy, improved waste management, and environmental education. These projects included industrial plant relocation or closure, furnace replacement, introduction of new emission standards, and more strict traffic control.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In 2009, the UN General Assembly granted the IOC Permanent Observer status. The decision enables the IOC to be directly involved in the UN Agenda and to attend UN General Assembly meetings where it can take the floor. In 1993, the General Assembly approved a Resolution to further solidify IOC–UN cooperation by reviving the Olympic Truce.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The IOC received approval in November 2015 to construct a new headquarters in Vidy, Lausanne. The cost of the project was estimated to stand at $156m. The IOC announced on the 11th of February 2019 that the \"Olympic House\" would be inaugurated on the 23rd of June 2019 to coincide with its 125th anniversary. The Olympic Museum remains in Ouchy, Lausanne.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Since 2002, the IOC has been involved in several high-profile controversies including taking gifts, its DMCA take down request of the 2008 Tibetan protest videos, Russian doping scandals, and its support of the Beijing 2022 Winter Olympics despite China's human rights violations documented in the Xinjiang Papers.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Detailed frameworks for environmental sustainability were prepared for the 2018 Winter Olympics, and 2020 Summer Olympics in PyeongChang, South Korea, and Tokyo.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "It is an association under the Swiss Civil Code (articles 60–79).",
"title": "Organization"
},
{
"paragraph_id": 16,
"text": "The IOC Session is the general meeting of the members of the IOC, held once a year in which each member has one vote. It is the IOC's supreme organ and its decisions are final.",
"title": "Organization"
},
{
"paragraph_id": 17,
"text": "Extraordinary Sessions may be convened by the President or upon the written request of at least one third of the members.",
"title": "Organization"
},
{
"paragraph_id": 18,
"text": "Among others, the powers of the Session are:",
"title": "Organization"
},
{
"paragraph_id": 19,
"text": "For most of its existence the IOC was controlled by members who were selected by other members. Countries that had hosted the Games were allowed two members. When named they became IOC members in their respective countries rather than representatives of their respective countries to the IOC.",
"title": "IOC members"
},
{
"paragraph_id": 20,
"text": "Membership ends under the following circumstances:",
"title": "IOC members"
},
{
"paragraph_id": 21,
"text": "IOC recognises 82 international sports federations (IFs):",
"title": "IOC members"
},
{
"paragraph_id": 22,
"text": "IOC awards gold, silver, and bronze medals for the top three competitors in each sporting event.",
"title": "Honours"
},
{
"paragraph_id": 23,
"text": "Other honours.",
"title": "Honours"
},
{
"paragraph_id": 24,
"text": "During the first half of the 20th century the IOC ran on a small budget. As IOC president from 1952 to 1972, Avery Brundage rejected all attempts to link the Olympics with commercial interests. Brundage believed that corporate interests would unduly impact the IOC's decision-making. Brundage's resistance to this revenue stream left IOC organising committees to negotiate their own sponsorship contracts and use the Olympic symbols.",
"title": "Olympic marketing"
},
{
"paragraph_id": 25,
"text": "When Brundage retired the IOC had US$2 million in assets; eight years later coffers had swollen, to US$45 million. This was primarily due to a shift in ideology toward expansion of the Games through corporate sponsorship and the sale of television rights. When Juan Antonio Samaranch was elected IOC president in 1980 his desire was to make the IOC financially independent. Samaranch appointed Canadian IOC member Richard Pound to lead the initiative as Chairman of the \"New Sources of Finance Commission\".",
"title": "Olympic marketing"
},
{
"paragraph_id": 26,
"text": "In 1982 the IOC drafted International Sport and Leisure, a Swiss sports marketing company, to develop a global marketing programme for the Olympic Movement. ISL developed the programme, but was replaced by Meridian Management, a company partly owned by the IOC in the early 1990s. In 1989, a staff member at ISL Marketing, Michael Payne, moved to the IOC and became the organisation's first marketing director. ISL and then Meridian continued in the established role as the IOC's sales and marketing agents until 2002. In collaboration with ISL Marketing and Meridian Management, Payne made major contributions to the creation of a multibillion-dollar sponsorship marketing programme for the organisation which, along with improvements in TV marketing and improved financial management, helped to restore the IOC's financial viability.",
"title": "Olympic marketing"
},
{
"paragraph_id": 27,
"text": "The Olympic Movement generates revenue through five major programmes.",
"title": "Olympic marketing"
},
{
"paragraph_id": 28,
"text": "The OCOGs have responsibility for domestic sponsorship, ticketing and licensing programmes, under the direction of the IOC. The Olympic Movement generated a total of more than US$4 billion (€2.5 billion) in revenue during the Olympic quadrennium from 2001 to 2004.",
"title": "Olympic marketing"
},
{
"paragraph_id": 29,
"text": "The IOC distributes some of its revenue to organisations throughout the Olympic Movement to support the staging of the Olympic Games and to promote worldwide sport development. The IOC retains approximately 10% of the Olympic marketing revenue for operational and administrative costs. For the 2013–2016 period, IOC had revenues of about US$5.0 billion, of which 73% were from broadcasting rights and 18% were from Olympic Partners. The Rio 2016 organising committee received US$1.5 billion and the Sochi 2014 organising committee received US$833 million. National Olympic committees and international federations received US$739 million each.",
"title": "Olympic marketing"
},
{
"paragraph_id": 30,
"text": "In July 2000, when the Los Angeles Times reported on how the IOC redistributes profits from sponsorships and broadcasting rights, historian Bob Barney stated that he had \"yet to see matters of corruption in the IOC\", but noted there were \"matters of unaccountability\". He later noted that when the spotlight is on the athletes, it has \"the power to eclipse impressions of scandal or corruption\", with respect to the Olympic bid process.",
"title": "Olympic marketing"
},
{
"paragraph_id": 31,
"text": "The IOC provides TOP programme contributions and broadcast revenue to the OCOGs to support the staging of the Olympic Games:",
"title": "Olympic marketing"
},
{
"paragraph_id": 32,
"text": "NOCs receive financial support for training and developing their Olympic teams, Olympic athletes, and Olympic hopefuls. The IOC distributes TOP programme revenue to each NOC. The IOC also contributes Olympic broadcast revenue to Olympic Solidarity, an IOC organisation that provides financial support to NOCs with the greatest need. The continued success of the TOP programme and Olympic broadcast agreements has enabled the IOC to provide increased support for the NOCs with each Olympic quadrennium. The IOC provided approximately US$318.5 million to NOCs for the 2001–2004 quadrennium.",
"title": "Olympic marketing"
},
{
"paragraph_id": 33,
"text": "The IOC is the largest single revenue source for the majority of IOSFs, with contributions that assist them in developing their respective sports. The IOC provides financial support to the 28 IOSFs of Olympic summer sports and the seven IOSFs of Olympic winter sports. The continually increasing value of Olympic broadcasts has enabled the IOC to substantially increase financial support to IOSFs with each successive Games. The seven winter sports IFs shared US$85.8 million (€75 million) in Salt Lake 2002 broadcast revenue.",
"title": "Olympic marketing"
},
{
"paragraph_id": 34,
"text": "The IOC contributes Olympic marketing revenue to the programmes of various recognised international sports organisations, including the International Paralympic Committee (IPC), and the World Anti-Doping Agency (WADA).",
"title": "Olympic marketing"
},
{
"paragraph_id": 35,
"text": "The IOC requires cities bidding to host the Olympics to provide a comprehensive strategy to protect the environment in preparation for hosting, and following the conclusion of the Games.",
"title": "Environmental concerns"
},
{
"paragraph_id": 36,
"text": "The IOC has four major approaches to addressing environmental health concerns.",
"title": "Environmental concerns"
},
{
"paragraph_id": 37,
"text": "Host cities have concerns about traffic congestion and air pollution, both of which can compromise air quality during and after venue construction. Various air quality improvement measures are undertaken before and after each event. Traffic control is the primary method to reduce concentrations of air pollutants, including barring heavy vehicles.",
"title": "Environmental concerns"
},
{
"paragraph_id": 38,
"text": "Research at the Beijing Olympic Games identified particulate matter – measured in terms of PM10 (the concentration of particles with an aerodynamic diameter of 10 μm or less in a given volume of air) – as a top priority. Particulate matter, along with other airborne pollutants, causes serious health problems, such as asthma, and damages urban ecosystems. Black carbon is released into the air from incomplete combustion of carbonaceous fuels, contributing to climate change and harming human health. Secondary pollutants such as CO, NOx, SO2, benzene, toluene, ethylbenzene, and xylenes (BTEX) are also released during construction.",
"title": "Environmental concerns"
},
{
"paragraph_id": 39,
"text": "For the Beijing Olympics, vehicles not meeting the Euro 1 emission standards were banned, and the odd-even rule was implemented in the Beijing administrative area. Air quality improvement measures implemented by the Beijing government included replacing coal with natural gas, suspending construction and/or imposing strict dust control on construction sites, closing or relocating polluting industrial plants, building long subway lines, using cleaner fuels in power plants, and reducing activity at some of the polluting factories. As a result, levels of primary and secondary pollutants were reduced, and good air quality was recorded during the Beijing Olympics on most days. Beijing also sprayed silver iodide into the atmosphere to induce rain to remove existing pollutants from the air.",
"title": "Environmental concerns"
},
{
"paragraph_id": 40,
"text": "Soil contamination can occur during construction. The Sydney Olympic Games of 2000 resulted in improving a highly contaminated area known as Homebush Bay. A pre-Games study reported soil metal concentrations high enough to potentially contaminate groundwater. A remediation strategy was developed. Contaminated soil was consolidated into four containment areas within the site, which left the remaining areas available for recreational use. The site contained waste materials that then no longer posed a threat to surrounding aquifers. In the 2006 Games in Torino, Italy, soil impacts were observed. Before the Games, researchers studied four areas that the Games would likely affect: a floodplain, a highway, the motorway connecting the city to Lyon, France, and a landfill. They analysed the chemicals in these areas before and after the Games. Their findings revealed an increase in the number of metals in the topsoil post-Games, and indicated that soil was capable of buffering the effects of many but not all heavy metals. Mercury, lead, and arsenic may have been transferred into the food chain.",
"title": "Environmental concerns"
},
{
"paragraph_id": 41,
"text": "One promise made to Londoners for the 2012 Olympic Games was that the Olympic Park would be a \"blueprint for sustainable living.\" However, garden allotments were temporarily relocated due to the building of the Olympic stadium. The allotments were eventually returned, but the soil quality had been damaged. Further, allotment residents were exposed to radioactive waste for five months prior to moving, during the excavation of the site for the Games. Other local residents, construction workers, and onsite archaeologists faced similar exposures and risks.",
"title": "Environmental concerns"
},
{
"paragraph_id": 42,
"text": "The Olympic Games can affect water quality in several ways, including runoff and the transfer of polluting substances from the air to water sources through rainfall. Harmful particulates come from natural substances (such as plant matter crushed by higher volumes of pedestrian and vehicle traffic) and man-made substances (such as exhaust from vehicles or industry). Contaminants from these two categories elevate amounts of toxins in street dust. Street dust reaches water sources through runoff, facilitating the transfer of toxins to environments and communities that rely on these water sources.",
"title": "Environmental concerns"
},
{
"paragraph_id": 43,
"text": "In 2013, researchers in Beijing found a significant relationship between PM2.5 concentrations in the air and in rainfall. Studies showed that rainfall had transferred a large portion of these pollutants from the air to water sources. Notably, this cleared the air of such particulates, substantially improving air quality at the venues.",
"title": "Environmental concerns"
},
{
"paragraph_id": 44,
"text": "De Coubertin was influenced by the aristocratic ethos exemplified by English public schools. The public schools subscribed to the belief that sport formed an important part of education but that practicing or training was considered cheating. As class structure evolved through the 20th century, the definition of the amateur athlete as an aristocratic gentleman became outdated. The advent of the state-sponsored \"full-time amateur athlete\" of Eastern Bloc countries further eroded the notion of the pure amateur, as it put Western, self-financed amateurs at a disadvantage. The Soviet Union entered teams of athletes who were all nominally students, soldiers, or working in a profession, but many of whom were paid by the state to train on a full-time basis. Nevertheless, the IOC held to the traditional rules regarding amateurism.",
"title": "Controversies"
},
{
"paragraph_id": 45,
"text": "Near the end of the 1960s, the Canadian Amateur Hockey Association (CAHA) felt their amateur players could no longer be competitive against the Soviet full-time athletes and other constantly improving European teams. They pushed for the ability to use players from professional leagues, but met opposition from the IIHF and IOC. At the IIHF Congress in 1969, the IIHF decided to allow Canada to use nine non-NHL professional hockey players at the 1970 World Championships in Montreal and Winnipeg, Manitoba, Canada. The decision was reversed in January 1970 after Brundage declared that the change would put ice hockey's status as an Olympic sport in jeopardy. In response, Canada withdrew from international ice hockey competition and officials stated that they would not return until \"open competition\" was instituted.",
"title": "Controversies"
},
{
"paragraph_id": 46,
"text": "Beginning in the 1970s, amateurism was gradually phased out of the Olympic Charter. After the 1988 Games, the IOC decided to make all professional athletes eligible for the Olympics, subject to the approval of the IFOSs.",
"title": "Controversies"
},
{
"paragraph_id": 47,
"text": "The Games were originally awarded to Denver on 12 May 1970, but a rise in costs led to Colorado voters' rejection on 7 November 1972, by a 3 to 2 margin, of a $5 million bond issue to finance the Games with public funds.",
"title": "Controversies"
},
{
"paragraph_id": 48,
"text": "Denver officially withdrew on 15 November, and the IOC then offered the Games to Whistler, British Columbia, Canada, but they too declined, owing to a change of government following elections.",
"title": "Controversies"
},
{
"paragraph_id": 49,
"text": "Salt Lake City, Utah, a 1972 Winter Olympics final candidate who eventually hosted the 2002 Winter Olympics, offered itself as a potential host after Denver's withdrawal. The IOC declined Salt Lake City's offer and, on 5 February 1973, selected Innsbruck, the city that had hosted the Games twelve years earlier.",
"title": "Controversies"
},
{
"paragraph_id": 50,
"text": "A scandal broke on 10 December 1998, when Swiss IOC member Marc Hodler, head of the coordination committee overseeing the organisation of the 2002 Games, announced that several members of the IOC had received gifts from members of the Salt Lake City 2002 bid Committee in exchange for votes. Soon four independent investigations were underway: by the IOC, the United States Olympic Committee (USOC), the SLOC, and the United States Department of Justice. Before any of the investigations could get under way, SLOC co-heads Tom Welch and David Johnson both resigned their posts. Many others soon followed. The Department of Justice filed fifteen counts of bribery and fraud against the pair.",
"title": "Controversies"
},
{
"paragraph_id": 51,
"text": "As a result of the investigation, ten IOC members were expelled and another ten were sanctioned. Stricter rules were adopted for future bids, and caps were put into place as to how much IOC members could accept from bid cities. Additionally, new term and age limits were put into place for IOC membership, an Athlete's Commission was created and fifteen former Olympic athletes gained provisional membership status.",
"title": "Controversies"
},
{
"paragraph_id": 52,
"text": "In 2000, international human rights groups attempted to pressure the IOC to reject Beijing's bid to protest human rights in the People's Republic of China. One Chinese dissident was sentenced to two years in prison during an IOC tour. After the city won the 2008 Summer Olympic Games, Amnesty International and others expressed concerns regarding the human rights situation. The second principle in the Fundamental Principles of Olympism, Olympic Charter states that \"The goal of Olympism is to place sport at the service of the harmonious development of man, with a view to promoting a peaceful society concerned with the preservation of human dignity.\" Amnesty International considered PRC policies and practices as violating that principle.",
"title": "Controversies"
},
{
"paragraph_id": 53,
"text": "Some days before the Opening Ceremonies, in August 2008, the IOC issued DMCA take down notices on Tibetan Protests videos on YouTube. YouTube and the Electronic Frontier Foundation (EFF) pushed back against the IOC, which then withdrew their complaint.",
"title": "Controversies"
},
{
"paragraph_id": 54,
"text": "On 1 March 2016, Owen Gibson of The Guardian reported that French financial prosecutors investigating corruption in world athletics had expanded their remit to include the bidding and voting processes for the 2016 Summer Olympics and 2020 Summer Olympics. The story followed an earlier report in January by Gibson, who revealed that Papa Massata Diack, the son of then-IAAF president Lamine Diack, appeared to arrange for \"parcels\" to be delivered to six IOC members in 2008 when Qatar was bidding for the 2016 Summer Olympic Games, though it failed to make it beyond the shortlist. Weeks later, Qatari authorities denied the allegations. Gibson then reported that a €1.3m (£1m, $1.5m) payment from the Tokyo Olympic Committee team to an account linked to Papa Diack was made during Japan's successful race to host the 2020 Summer Games. The following day, French prosecutors confirmed they were investigating allegations of \"corruption and money laundering\" of more than $2m in suspicious payments made by the Tokyo 2020 Olympic bid committee to a secret bank account linked to Diack. Tsunekazu Takeda of the Tokyo 2020 bid committee responded on 17 May 2016, denying allegations of wrongdoing, and refused to reveal transfer details. The controversy was reignited on 11 January 2019 after it emerged Takeda had been indicted on corruption charges in France over his role in the bid process.",
"title": "Controversies"
},
{
"paragraph_id": 55,
"text": "In 2014, at the final stages of the bid process for 2022, Oslo, seen as the favourite, surprised with a withdrawal. Following a string of local controversies over the masterplan, local officials were outraged by IOC demands on athletes and the Olympic family. In addition, allegations about lavish treatment of stakeholders, including separate lanes to \"be created on all roads where IOC members will travel, which are not to be used by regular people or public transportation\", exclusive cars and drivers for IOC members. The differential treatment irritated Norwegians. The IOC demanded \"control over all advertising space throughout Oslo and the subsites during the Games, to be used exclusively by official sponsors.\"",
"title": "Controversies"
},
{
"paragraph_id": 56,
"text": "Human rights groups and governments criticised the committee for allowing Beijing to bid for the 2022 Winter Olympics. Some weeks before the Opening Ceremonies, the Xinjiang Papers were released, documenting abuses by the Chinese government against the Uyghur population in Xinjiang, documenting what many governments described as genocide.",
"title": "Controversies"
},
{
"paragraph_id": 57,
"text": "Many government officials, notably those in the United States and the Great Britain, called for a boycott of the 2022 winter games. The IOC responded to concerns by saying that the Olympic Games must not be politicized. Some Nations, including the United States, diplomatically boycotted games, which prohibited a diplomatic delegation from representing a nation at the games, rather than a full boycott that would have barred athletes from competing. In September 2021, the IOC suspended the Olympic Committee of the Democratic People's Republic of Korea, after they boycotted the 2020 Summer Olympics claiming \"COVID-19 Concerns\".",
"title": "Controversies"
},
{
"paragraph_id": 58,
"text": "On 14 October 2021, vice-president of the IOC, John Coates, announced that the IOC had no plans to challenge the Chinese government on humanitarian issues, stating that the issues were \"not within the IOC's remit\".",
"title": "Controversies"
},
{
"paragraph_id": 59,
"text": "In December 2021, the United States House of Representatives voted unanimously for a resolution stating that the IOC had violated its own human rights commitments by cooperating with the Chinese government. In January 2022, members of the U.S. House of Representatives unsuccessfully attempted to pass legislation to strip the IOC of its tax exemption status in the United States.",
"title": "Controversies"
},
{
"paragraph_id": 60,
"text": "The IOC uses Sex verification to ensure participants compete only in events matching their sex. Verifying the sex of Olympic participants dates back to ancient Greece when Kallipateira attempted to break Greek law by dressing as a man to enter the arena as a trainer. After she was discovered, a policy was erected wherein trainers, just as athletes, were made to appear naked in order to better assure all were male. In more recent history, sex verification has taken many forms and been subject to dispute. Before sex testing, Olympic officials relied on \"nude parades\" and doctor's notes. Successful women athletes perceived to be masculine were most likely to be inspected. In 1966, IOC implemented a compulsory sex verification process that took effect at the 1968 Winter Olympics where a lottery system was used to determine who would be inspected with a Barr body test. The scientific community found fault with this policy. The use of the Barr body test was evaluated by fifteen geneticists who unanimously agreed it was scientifically invalid. By the 1970s this method was replaced with PCR testing, as well as evaluating factors such as brain anatomy and behaviour. Following continued backlash against mandatory sex testing, the IOC's Athletes' Commission's opposition ended of the practice in 1999. Although sex testing was no longer mandated, women who did not present as feminine continued to be inspected based on suspicion. This started at 2000 Summer Olympics and remained in use until the 2010 Winter Olympics. By 2011 the IOC created a Hyperandrogenism Regulation, which aimed to standardize natural testosterone levels in women athletes. This transition in sex testing was to assure fairness within female events. This was due to the belief that higher testosterone levels increased athletic ability and gave unfair advantages to intersex and transgender competitors. Any female athlete flagged for suspicion and whose testosterone surpassed regulation levels was prohibited from competing until medical treatment brought their hormone levels within standard levels. It has been argued by press, scholars, and politicians that some ethnicities are disproportionately impacted by this regulation and that the rule excludes too many. The most notable cases of bans testing results are: Maria José Martínez-Patiño (1985), Santhi Soundarajan (2006), Caster Semenya (2009), Annet Negesa (2012), and Dutee Chand (2014).",
"title": "Controversies"
},
{
"paragraph_id": 61,
"text": "Before the 2014 Asian Games, Indian athlete Dutee Chand was banned from competing internationally having been found to be in violation of the Hyperandrogenism Regulation. Following the denial of her appeal by the Court of Arbitration for Sport, the IOC suspended the policy for the 2016 Summer Olympics and 2018 Winter Olympics.",
"title": "Controversies"
},
{
"paragraph_id": 62,
"text": "Eight years after the 1998 Winter Olympics, a report ordered by the Nagano region's governor said the Japanese city provided millions of dollars in an \"illegitimate and excessive level of hospitality\" to IOC members, including US$4.4 million spent on entertainment. Earlier reports put the figure at approximately US$14 million. The precise figures are unknown: after the IOC asked that the entertainment expenditures not be made public Nagano destroyed its financial records.",
"title": "Controversies"
},
{
"paragraph_id": 63,
"text": "In 2010, the IOC was nominated for the Public Eye Awards. This award seeks to present \"shame-on-you-awards to the nastiest corporate players of the year\".",
"title": "Controversies"
},
{
"paragraph_id": 64,
"text": "Before the start of the 2012 Summer Olympic Games, the IOC decided not to hold a minute of silence to honour the 11 Israeli Olympians who were killed 40 years prior in the Munich massacre. Jacques Rogge, the then-IOC President, said it would be \"inappropriate\" to do so. Speaking of the decision, Israeli Olympian Shaul Ladany, who had survived the Munich Massacre, commented: \"I do not understand. I do not understand, and I do not accept it\".",
"title": "Controversies"
},
{
"paragraph_id": 65,
"text": "In February 2013, the IOC excluded wrestling from its core Olympic sports for the Summer Olympic programme for the 2020 Summer Olympics, because the sport did offer equal opportunities for men and women. This decision was attacked by the sporting community, given the sport's long traditions. This decision was later overturned, after a reassessment. Later, the sport was placed among the core Olympic sports, which it will hold until at least 2032.",
"title": "Controversies"
},
{
"paragraph_id": 66,
"text": "Media attention began growing in December 2014 when German broadcaster ARD reported on state-sponsored doping in Russia, comparing it to doping in East Germany. In November 2015, the World Anti-Doping Agency (WADA) published a report and the World Athletics (then known as the IAAF) suspended Russia indefinitely from world track and field events. The United Kingdom Anti-Doping agency later assisted WADA with testing in Russia. In June 2016, they reported that they were unable to fully carry out their work and noted intimidation by armed Federal Security Service (FSB) agents. After a Russian former lab director made allegations about the 2014 Winter Olympics in Sochi, WADA commissioned an independent investigation led by Richard McLaren. McLaren's investigation found corroborating evidence, concluding in a report published in July 2016 that the Ministry of Sport and the FSB had operated a \"state-directed failsafe system\" using a \"disappearing positive [test] methodology\" (DPM) from \"at least late 2011 to August 2015\".",
"title": "Controversies"
},
{
"paragraph_id": 67,
"text": "In response to these findings, WADA announced that RUSADA should be regarded as non-compliant with respect to the World Anti-Doping Code and recommended that Russia be banned from competing at the 2016 Summer Olympics. The IOC rejected the recommendation, stating that a separate decision would be made for each athlete by the relevant IF and the IOC, based on the athlete's individual circumstances. One day prior to the opening ceremony, 270 athletes were cleared to compete under the Russian flag, while 167 were removed because of doping. In contrast, the entire Kuwaiti team was banned from competing under their own flag (for a non-doping related matter).",
"title": "Controversies"
},
{
"paragraph_id": 68,
"text": "In contrast to the IOC, the IPC voted unanimously to ban the entire Russian team from the 2016 Summer Paralympics, having found evidence that the DPM was also in operation at the 2014 Winter Paralympics.",
"title": "Controversies"
},
{
"paragraph_id": 69,
"text": "On 5 December 2017, the IOC announced that the Russian Olympic Committee had been suspended effective immediately from the 2018 Winter Olympics. Athletes who had no previous drug violations and a consistent history of drug testing were allowed to compete under the Olympic Flag as an \"Olympic Athlete from Russia\" (OAR). Under the terms of the decree, Russian government officials were barred from the Games, and neither the country's flag nor anthem would be present. The Olympic Flag and Olympic Anthem would be used instead, and on 20 December 2017 the IOC proposed an alternate uniform logo.",
"title": "Controversies"
},
{
"paragraph_id": 70,
"text": "On 1 February 2018, the Court of Arbitration for Sport (CAS) found that the IOC provided insufficient evidence for 28 athletes, and overturned their IOC sanctions. For 11 other athletes, the CAS decided that there was sufficient evidence to uphold their Sochi sanctions, but reduced their lifetime bans to only the 2018 Winter Olympics. The IOC said in a statement that \"the result of the CAS decision does not mean that athletes from the group of 28 will be invited to the Games. Not being sanctioned does not automatically confer the privilege of an invitation\" and that \"this [case] may have a serious impact on the future fight against doping\". The IOC found it important to note that the CAS Secretary General \"insisted that the CAS decision does not mean that these 28 athletes are innocent\" and that they would consider an appeal against the court's decision. Later that month, the Russian Olympic Committee was reinstated by the IOC, despite numerous failed drug tests by Russian athletes in the 2018 Olympics. The Russian Anti-Doping Agency was re-certified in September, despite the Russian rejection of the McLaren Report.",
"title": "Controversies"
},
{
"paragraph_id": 71,
"text": "On 24 November 2018, the Taiwanese government held a referendum over a change in the naming of their National Olympic Committee, from \"Chinese Taipei,\" a name agreed to in 1981 by the People's Republic of China in the Nagoya Protocol, which denies the Republic of China's legitimacy, to simply \"Taiwan\", after the main island in the Free Area. In the immediate days prior to the referendum, the IOC and the PRC government, issued a threatening statement, suggesting that if the team underwent the name change, the IOC had the legal right to make a \"suspension of or forced withdrawal,\" of the team from the 2020 Summer Olympics. In response to the allegations of election interference, the IOC stated, \"The IOC does not interfere with local procedures and fully respects freedom of expression. However, to avoid any unnecessary expectations or speculations, the IOC wishes to reiterate that this matter is under its jurisdiction.\" Subsequently, with a significant PRC pressure, the referendum failed in Taiwan with 45% to 54%.",
"title": "Controversies"
},
{
"paragraph_id": 72,
"text": "In November 2021, the IOC was again criticized by Human Rights Watch (HRW) and others for its response to the 2021 disappearance of Peng Shuai, following her publishing of sexual assault allegations against a former Chinese vice premier, and high-ranking member of the Chinese Communist Party, Zhang Gaoli. The IOC's response was internationally criticized as complicit in assisting the Chinese government to silence Peng's sexual assault allegations. Zhang Gaoli previously led the Beijing bidding committee to host the 2022 Winter Olympics.",
"title": "Controversies"
},
{
"paragraph_id": 73,
"text": "In July 2020 (and reconfirmed by FIE public notice in September 2020 and in January 2021), by public written notice the FIE had replaced its previous handshake requirement with a \"salute\" by the opposing fencers, and written in its public notice that handshakes were \"suspended until further notice.\" Nevertheless, in July 2023 when Ukrainian four-time world fencing individual sabre champion Olga Kharlan was disqualified at the World Fencing Championships by the Fédération Internationale d'Escrime for not shaking the hand of her defeated Russian opponent, although Kharlan instead offered a tapping of blades in acknowledgement, Thomas Bach stepped in the next day. As President of the IOC, he sent a letter to Kharlan in which he expressed empathy for her, and wrote that in light of the situation she was guaranteed a spot in the 2024 Summer Olympics. He wrote further: \"as a fellow fencer, it is impossible for me to imagine how you feel at this moment. The war against your country, the suffering of the people in Ukraine, the uncertainty around your participation at the Fencing World Championships ... and then the events which unfolded yesterday – all this is a roller coaster of emotions and feelings. It is admirable how you are managing this incredibly difficult situation, and I would like to express my full support to you. Rest assured that the IOC will continue to stand in full solidarity with the Ukrainian athletes and the Olympic community of Ukraine.\"",
"title": "Controversies"
},
{
"paragraph_id": 74,
"text": "The Olympic Partner (TOP) sponsorship programme includes the following commercial sponsors of the Olympic Games.",
"title": "The Olympic Partner programme"
},
{
"paragraph_id": 75,
"text": "46°31′5″N 6°35′49″E / 46.51806°N 6.59694°E / 46.51806; 6.59694",
"title": "Further reading"
}
]
| The International Olympic Committee is a non-governmental sports organisation based in Lausanne, Switzerland. Founded in 1894 by Pierre de Coubertin and Demetrios Vikelas, it is the authority responsible for organising the modern Olympic Games. The IOC is the governing body of the National Olympic Committees (NOCs) and of the worldwide Olympic Movement, the IOC's term for all entities and individuals involved in the Olympic Games. As of 2020, 206 NOCs were officially recognised by the IOC. Its president is Thomas Bach. | 2001-10-10T03:44:11Z | 2023-12-14T21:28:05Z | [
"Template:ESP",
"Template:Reflist",
"Template:Cite news",
"Template:Webarchive",
"Template:Coord",
"Template:International Sports Federations",
"Template:Use dmy dates",
"Template:FIN",
"Template:SRB",
"Template:CZE",
"Template:Cite book",
"Template:Citation",
"Template:International Olympic Committee",
"Template:FIJ",
"Template:RSA",
"Template:PNG",
"Template:ARG",
"Template:Cite journal",
"Template:Use British English",
"Template:BEL",
"Template:KOR",
"Template:Harvnb",
"Template:Authority control",
"Template:Portal bar",
"Template:AUS",
"Template:Cite web",
"Template:NOR",
"Template:Association of National Olympic Committees",
"Template:Olympic Games",
"Template:Redirect",
"Template:Cn",
"Template:See also",
"Template:Sfn",
"Template:JOR",
"Template:MON",
"Template:Cite press release",
"Template:Main",
"Template:ARU",
"Template:CRO",
"Template:GBR",
"Template:AUT",
"Template:Cbignore",
"Template:Dead link",
"Template:Infobox organization",
"Template:Further",
"Template:ITA",
"Template:TUR",
"Template:Lang-fr",
"Template:Toclimit",
"Template:Anchor",
"Template:GER",
"Template:SUI",
"Template:ZIM",
"Template:Mi",
"Template:PHI",
"Template:PUR",
"Template:BDI",
"Template:Commons category",
"Template:Short description",
"Template:SIN",
"Template:UKR",
"Template:ROU",
"Template:THA",
"Template:Olympic Games infobox",
"Template:CHN",
"Template:COL",
"Template:Cite magazine"
]
| https://en.wikipedia.org/wiki/International_Olympic_Committee |
15,150 | Integrated circuit | An integrated circuit (also known as an IC, a chip, or a microchip) is a set of electronic circuits on one small flat piece of semiconductor material, usually silicon. In an IC, a large number of miniaturized transistors and other electronic components are integrated together on the chip. This results in circuits that are orders of magnitude smaller, faster, and less expensive than those constructed of discrete components, allowing a large transistor count.
The IC's mass production capability, reliability, and building-block approach to integrated circuit design have ensured the rapid adoption of standardized ICs in place of designs using discrete transistors. ICs are now used in virtually all electronic equipment and have revolutionized the world of electronics. Computers, mobile phones and other home appliances are now essential parts of the structure of modern societies, made possible by the small size and low cost of ICs such as modern computer processors and microcontrollers.
Very-large-scale integration was made practical by technological advancements in semiconductor device fabrication. Since their origins in the 1960s, the size, speed, and capacity of chips have progressed enormously, driven by technical advances that fit more and more transistors on chips of the same size – a modern chip may have many billions of transistors in an area the size of a human fingernail. These advances, roughly following Moore's law, make the computer chips of today possess millions of times the capacity and thousands of times the speed of the computer chips of the early 1970s.
ICs have three main advantages over discrete circuits: size, cost and performance. The size and cost are low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time. Furthermore, packaged ICs use much less material than discrete circuits. Performance is high because the IC's components switch quickly and consume comparatively little power because of their small size and proximity. The main disadvantage of ICs is the high initial cost of designing them and the enormous capital cost of factory construction. This high initial cost means ICs are only commercially viable when high production volumes are anticipated.
An integrated circuit is defined as:
A circuit in which all or some of the circuit elements are inseparably associated and electrically interconnected so that it is considered to be indivisible for the purposes of construction and commerce.
In strict usage integrated circuit refers to the single-piece circuit construction originally known as a monolithic integrated circuit, built on a single piece of silicon. In general usage, circuits not meeting this strict definition are sometimes referred to as ICs, which are constructed using many different technologies, e.g. 3D IC, 2.5D IC, MCM, thin-film transistors, thick-film technologies, or hybrid integrated circuits. The choice of terminology frequently appears in discussions related to whether Moore's Law is obsolete.
An early attempt at combining several components in one device (like modern ICs) was the Loewe 3NF vacuum tube from the 1920s. Unlike ICs, it was designed for tax avoidance: in Germany, radio receivers were subject to a tax levied according to the number of tube holders they contained. The 3NF allowed radio receivers to have a single tube holder.
Early concepts of an integrated circuit go back to 1949, when German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device showing five transistors on a common substrate in a three-stage amplifier arrangement. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. An immediate commercial use of his patent has not been reported.
Another early proponent of the concept was Geoffrey Dummer (1909–2002), a radar scientist working for the Royal Radar Establishment of the British Ministry of Defence. Dummer presented the idea to the public at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. He gave many symposia publicly to propagate his ideas and unsuccessfully attempted to build such a circuit in 1956. Between 1953 and 1957, Sidney Darlington and Yasuo Tarui (Electrotechnical Laboratory) proposed similar chip designs where several transistors could share a common active area, but there was no electrical isolation to separate them from each other.
The monolithic integrated circuit chip was enabled by the inventions of the planar process by Jean Hoerni and p–n junction isolation by Kurt Lehovec. Hoerni's invention was built on Mohamed M. Atalla's work on surface passivation, as well as Fuller and Ditzenberger's work on the diffusion of boron and phosphorus impurities into silicon, Carl Frosch and Lincoln Derick's work on surface protection, and Chih-Tang Sah's work on diffusion masking by the oxide.
A precursor idea to the IC was to create small ceramic substrates (so-called micromodules), each containing a single miniaturized component. Components could then be integrated and wired into a bidimensional or tridimensional compact grid. This idea, which seemed very promising in 1957, was proposed to the US Army by Jack Kilby and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy). However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC.
Newly employed by Texas Instruments, Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working example of an integrated circuit on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated". The first customer for the new invention was the US Air Force. Kilby won the 2000 Nobel Prize in physics for his part in the invention of the integrated circuit.
However, Kilby's invention was not a true monolithic integrated circuit chip since it had external gold-wire connections, which would have made it difficult to mass-produce. Half a year after Kilby, Robert Noyce at Fairchild Semiconductor invented the first true monolithic IC chip. More practical than Kilby's implementation, Noyce's chip was made of silicon, whereas Kilby's was made of germanium, and Noyce's was fabricated using the planar process, developed in early 1959 by his colleague Jean Hoerni and included the critical on-chip aluminum interconnecting lines. Modern IC chips are based on Noyce's monolithic IC, rather than Kilby's.
NASA's Apollo Program was the largest single consumer of integrated circuits between 1961 and 1965.
Transistor–transistor logic (TTL) was developed by James L. Buie in the early 1960s at TRW Inc. TTL became the dominant integrated circuit technology during the 1970s to early 1980s.
Dozens of TTL integrated circuits were a standard method of construction for the processors of minicomputers and mainframe computers. Computers such as IBM 360 mainframes, PDP-11 minicomputers and the desktop Datapoint 2200 were built from bipolar integrated circuits, either TTL or the even faster emitter-coupled logic (ECL).
Nearly all modern IC chips are metal–oxide–semiconductor (MOS) integrated circuits, built from MOSFETs (metal–oxide–silicon field-effect transistors). The MOSFET (also known as the MOS transistor), which was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959, made it possible to build high-density integrated circuits. In contrast to bipolar transistors which required a number of steps for the p–n junction isolation of transistors on a chip, MOSFETs required no such steps but could be easily isolated from each other. Its advantage for integrated circuits was pointed out by Dawon Kahng in 1961. The list of IEEE milestones includes the first integrated circuit by Kilby in 1958, Hoerni's planar process and Noyce's planar IC in 1959, and the MOSFET by Atalla and Kahng in 1959.
The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS integrated circuit in 1964, a 120-transistor shift register developed by Robert Norman. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s.
Following the development of the self-aligned gate (silicon-gate) MOSFET by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC technology with self-aligned gates, the basis of all modern CMOS integrated circuits, was developed at Fairchild Semiconductor by Federico Faggin in 1968. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip. This led to the inventions of the microprocessor and the microcontroller by the early 1970s. During the early 1970s, MOS integrated circuit technology enabled the very large-scale integration (VLSI) of more than 10,000 transistors on a single chip.
At first, MOS-based computers only made sense when high density was required, such as in aerospace applications and pocket calculators. Computers built entirely from TTL, such as the 1970 Datapoint 2200, were much faster and more powerful than single-chip MOS microprocessors such as the 1972 Intel 8008 until the early 1980s.
Advances in IC technology, primarily smaller features and larger chips, have allowed the number of MOS transistors in an integrated circuit to double every two years, a trend known as Moore's law. Moore originally stated it would double every year, but he went on to change the claim to every two years in 1975. This increased capacity has been used to decrease cost and increase functionality. In general, as the feature size shrinks, almost every aspect of an IC's operation improves. The cost per transistor and the switching power consumption per transistor go down, while the memory capacity and speed go up, through the relationships defined by Dennard scaling (MOSFET scaling). Because speed, capacity, and power consumption gains are apparent to the end user, there is fierce competition among the manufacturers to use finer geometries. Over the years, transistor sizes have decreased from tens of microns in the early 1970s to 10 nanometers in 2017 with a corresponding million-fold increase in transistors per unit area. As of 2016, typical chip areas range from a few square millimeters to around 600 mm², with up to 25 million transistors per mm².
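As a rough illustration of the doubling arithmetic described above, the following Python sketch projects a transistor count forward under an assumed fixed doubling period; the starting count, years, and doubling period are illustrative assumptions rather than figures taken from this article.

    # Illustrative sketch: project a transistor count under an assumed
    # fixed doubling period (Moore's law as described above).
    def projected_transistors(start_count, start_year, target_year, doubling_period_years=2.0):
        """Return the projected transistor count at target_year."""
        elapsed_years = target_year - start_year
        return start_count * 2 ** (elapsed_years / doubling_period_years)

    # Hypothetical example: 2,300 transistors in 1971, projected to 2009,
    # gives roughly 1.2 billion transistors (19 doublings over 38 years).
    print(round(projected_transistors(2_300, 1971, 2009)))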
The expected shrinking of feature sizes and the needed progress in related areas was forecast for many years by the International Technology Roadmap for Semiconductors (ITRS). The final ITRS was issued in 2016, and it is being replaced by the International Roadmap for Devices and Systems.
Initially, ICs were strictly electronic devices. The success of ICs has led to the integration of other technologies, in an attempt to obtain the same advantages of small size and low cost. These technologies include mechanical devices, optics, and sensors.
As of 2018, the vast majority of all transistors are MOSFETs fabricated in a single layer on one side of a chip of silicon in a flat two-dimensional planar process. Researchers have produced prototypes of several promising alternatives, such as:
As it becomes more difficult to manufacture ever smaller transistors, companies are using multi-chip modules, three-dimensional integrated circuits, package on package, High Bandwidth Memory and through-silicon vias with die stacking to increase performance and reduce size, without having to reduce the size of the transistors. Such techniques are collectively known as advanced packaging. Advanced packaging is mainly divided into 2.5D and 3D packaging. 2.5D describes approaches such as multi-chip modules while 3D describes approaches where dies are stacked in one way or another, such as package on package and high bandwidth memory. All approaches involve 2 or more dies in a single package. Alternatively, approaches such as 3D NAND stack multiple layers on a single die.
The cost of designing and developing a complex integrated circuit is quite high, normally in the multiple tens of millions of dollars. Therefore, it only makes economic sense to produce integrated circuit products with high production volume, so the non-recurring engineering (NRE) costs are spread across typically millions of production units.
Modern semiconductor chips have billions of components, and are far too complex to be designed by hand. Software tools to help the designer are essential. Electronic design automation (EDA), also referred to as electronic computer-aided design (ECAD), is a category of software tools for designing electronic systems, including integrated circuits. The tools work together in a design flow that engineers use to design, verify, and analyze entire semiconductor chips. Some of the latest EDA tools use artificial intelligence (AI) to help engineers save time and improve chip performance.
Integrated circuits can be broadly classified into analog, digital and mixed signal, consisting of analog and digital signaling on the same IC.
Digital integrated circuits can contain billions of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers, use boolean algebra to process "one" and "zero" signals.
Among the most advanced integrated circuits are the microprocessors or "cores", used in personal computers, cell-phones, microwave ovens, etc. Several cores may be integrated together in a single IC or chip. Digital memory chips and application-specific integrated circuits (ASICs) are examples of other families of integrated circuits.
In the 1980s, programmable logic devices were developed. These devices contain circuits whose logical function and connectivity can be programmed by the user, rather than being fixed by the integrated circuit manufacturer. This allows a chip to be programmed to do various LSI-type functions such as logic gates, adders and registers. Programmability comes in various forms – devices that can be programmed only once, devices that can be erased and then re-programmed using UV light, devices that can be (re)programmed using flash memory, and field-programmable gate arrays (FPGAs) which can be programmed at any time, including during operation. Current FPGAs can (as of 2016) implement the equivalent of millions of gates and operate at frequencies up to 1 GHz.
Analog ICs, such as sensors, power management circuits, and operational amplifiers (op-amps), process continuous signals, and perform analog functions such as amplification, active filtering, demodulation, and mixing.
ICs can combine analog and digital circuits on a chip to create functions such as analog-to-digital converters and digital-to-analog converters. Such mixed-signal circuits offer smaller size and lower cost, but must account for signal interference. Prior to the late 1990s, radios could not be fabricated in the same low-cost CMOS processes as microprocessors. But since 1998, radio chips have been developed using RF CMOS processes. Examples include Intel's DECT cordless phone, or 802.11 (Wi-Fi) chips created by Atheros and other companies.
Modern electronic component distributors often further sub-categorize integrated circuits:
The semiconductors of the periodic table of the chemical elements were identified as the most likely materials for a solid-state vacuum tube. Starting with copper oxide, proceeding to germanium, then silicon, the materials were systematically studied in the 1940s and 1950s. Today, monocrystalline silicon is the main substrate used for ICs although some III-V compounds of the periodic table such as gallium arsenide are used for specialized applications like LEDs, lasers, solar cells and the highest-speed integrated circuits. It took decades to perfect methods of creating crystals with minimal defects in semiconducting materials' crystal structure.
Semiconductor ICs are fabricated in a planar process which includes three key process steps – photolithography, deposition (such as chemical vapor deposition), and etching. The main process steps are supplemented by doping and cleaning. More recent or high-performance ICs may instead use multi-gate FinFET or GAAFET transistors instead of planar ones, starting at the 22 nm node (Intel) or 16/14 nm nodes.
Mono-crystal silicon wafers are used in most applications (or for special applications, other semiconductors such as gallium arsenide are used). The wafer need not be entirely silicon. Photolithography is used to mark different areas of the substrate to be doped or to have polysilicon, insulators or metal (typically aluminium or copper) tracks deposited on them. Dopants are impurities intentionally introduced to a semiconductor to modulate its electronic properties. Doping is the process of adding dopants to a semiconductor material.
Since a CMOS device only draws current on the transition between logic states, CMOS devices consume much less current than bipolar junction transistor devices.
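The observation that a CMOS device draws significant current mainly while switching is commonly summarized by the first-order dynamic power relation P = α·C·V²·f (activity factor, switched capacitance, supply voltage, clock frequency). The short Python sketch below simply evaluates that relation; the numeric values are illustrative assumptions only, not figures from this article.

    # Illustrative sketch: first-order CMOS dynamic (switching) power.
    def cmos_dynamic_power(activity_factor, capacitance_farads, voltage_volts, frequency_hz):
        # P = alpha * C * V^2 * f
        return activity_factor * capacitance_farads * voltage_volts ** 2 * frequency_hz

    # Assumed values: 10 nF of total switched capacitance, 1.0 V supply, 1 GHz clock,
    # and 10% of gates switching per cycle -> about 1 W of dynamic power.
    print(cmos_dynamic_power(0.1, 10e-9, 1.0, 1e9))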
A random-access memory is the most regular type of integrated circuit; the highest density devices are thus memories; but even a microprocessor will have memory on the chip. (See the regular array structure at the bottom of the first image.) Although the structures are intricate – with widths which have been shrinking for decades – the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to "expose" a layer of material, as they would be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer who might be debugging a fabrication process.
Each device is tested before packaging using automated test equipment (ATE), in a process known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is called a die. Each good die (plural dice, dies, or die) is then connected into a package using aluminium (or gold) bond wires which are thermosonically bonded to pads, usually found around the edge of the die. Thermosonic bonding was first introduced by A. Coucoulas which provided a reliable means of forming these vital electrical connections to the outside world. After packaging, the devices go through final testing on the same or similar ATE used during wafer probing. Industrial CT scanning can also be used. Test cost can account for over 25% of the cost of fabrication on lower-cost products, but can be negligible on low-yielding, larger, or higher-cost devices.
As of 2022, a fabrication facility (commonly known as a semiconductor fab) can cost over US$12 billion to construct. The cost of a fabrication facility rises over time because of increased complexity of new products; this is known as Rock's law. Such a facility features:
ICs can be manufactured either in-house by integrated device manufacturers (IDMs) or using the foundry model. IDMs are vertically integrated companies (like Intel and Samsung) that design, manufacture and sell their own ICs, and may offer design and/or manufacturing (foundry) services to other companies (the latter often to fabless companies). In the foundry model, fabless companies (like Nvidia) only design and sell ICs and outsource all manufacturing to pure play foundries such as TSMC. These foundries may offer IC design services.
The earliest integrated circuits were packaged in ceramic flat packs, which continued to be used by the military for their reliability and small size for many years. Commercial circuit packaging quickly moved to the dual in-line package (DIP), first in ceramic and later in plastic, which is commonly cresol-formaldehyde-novolac. In the 1980s pin counts of VLSI circuits exceeded the practical limit for DIP packaging, leading to pin grid array (PGA) and leadless chip carrier (LCC) packages. Surface mount packaging appeared in the early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as either gull-wing or J-lead, as exemplified by the small-outline integrated circuit (SOIC) package – a carrier which occupies an area about 30–50% less than an equivalent DIP and is typically 70% thinner. This package has "gull wing" leads protruding from the two long sides and a lead spacing of 0.050 inches.
In the late 1990s, plastic quad flat pack (PQFP) and thin small-outline package (TSOP) packages became the most common for high pin count devices, though PGA packages are still used for high-end microprocessors.
Ball grid array (BGA) packages have existed since the 1970s. Flip-chip Ball Grid Array packages, which allow for a much higher pin count than other package types, were developed in the 1990s. In an FCBGA package, the die is mounted upside-down (flipped) and connects to the package balls via a package substrate that is similar to a printed-circuit board rather than by wires. FCBGA packages allow an array of input-output signals (called Area-I/O) to be distributed over the entire die rather than being confined to the die periphery. BGA devices have the advantage of not needing a dedicated socket but are much harder to replace in case of device failure.
Intel transitioned away from PGA to land grid array (LGA) and BGA beginning in 2004, with the last PGA socket released in 2014 for mobile platforms. As of 2018, AMD uses PGA packages on mainstream desktop processors, BGA packages on mobile processors, and high-end desktop and server microprocessors use LGA packages.
Electrical signals leaving the die must pass through the material electrically connecting the die to the package, through the conductive traces (paths) in the package, through the leads connecting the package to the conductive traces on the printed circuit board. The materials and structures used in the path these electrical signals must travel have very different electrical properties, compared to those that travel to different parts of the same die. As a result, they require special design techniques to ensure the signals are not corrupted, and much more electric power than signals confined to the die itself.
When multiple dies are put in one package, the result is a system in package, abbreviated SiP. A multi-chip module (MCM), is created by combining multiple dies on a small substrate often made of ceramic. The distinction between a large MCM and a small printed circuit board is sometimes fuzzy.
Packaged integrated circuits are usually large enough to include identifying information. Four common sections are the manufacturer's name or logo, the part number, a part production batch number and serial number, and a four-digit date-code to identify when the chip was manufactured. Extremely small surface-mount technology parts often bear only a number used in a manufacturer's lookup table to find the integrated circuit's characteristics.
The manufacturing date is commonly represented as a two-digit year followed by a two-digit week code, such that a part bearing the code 8341 was manufactured in week 41 of 1983, or approximately in October 1983.
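For illustration, the week-based date code described above can be decoded mechanically. This is a minimal Python sketch assuming the plain two-digit-year, two-digit-week form mentioned in the paragraph and a 1900s century; real markings vary by manufacturer.

    # Illustrative sketch: decode a four-digit YYWW date code such as "8341"
    # into a year, an ISO week number, and an approximate date.
    import datetime

    def decode_date_code(code, century=1900):
        year = century + int(code[:2])
        week = int(code[2:])
        # The Monday of that ISO week gives an approximate manufacturing date.
        approx_date = datetime.date.fromisocalendar(year, week, 1)
        return year, week, approx_date

    print(decode_date_code("8341"))  # (1983, 41, datetime.date(1983, 10, 10))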
The possibility of copying by photographing each layer of an integrated circuit and preparing photomasks for its production on the basis of the photographs obtained is a reason for the introduction of legislation for the protection of layout designs. The US Semiconductor Chip Protection Act of 1984 established intellectual property protection for photomasks used to produce integrated circuits.
A diplomatic conference held at Washington, D.C., in 1989 adopted a Treaty on Intellectual Property in Respect of Integrated Circuits, also called the Washington Treaty or IPIC Treaty. The treaty is currently not in force, but was partially integrated into the TRIPS agreement.
There are several United States patents connected to the integrated circuit, which include patents by J.S. Kilby US3,138,743, US3,261,081, US3,434,015 and by R.F. Stewart US3,138,747.
National laws protecting IC layout designs have been adopted in a number of countries, including Japan, the EC, the UK, Australia, and Korea. The UK enacted the Copyright, Designs and Patents Act, 1988, c. 48, § 213, after it initially took the position that its copyright law fully protected chip topographies. See British Leyland Motor Corp. v. Armstrong Patents Co.
Criticisms of inadequacy of the UK copyright approach as perceived by the US chip industry are summarized in further chip rights developments.
Australia passed the Circuit Layouts Act of 1989 as a sui generis form of chip protection. Korea passed the Act Concerning the Layout-Design of Semiconductor Integrated Circuits in 1992.
In the early days of simple integrated circuits, the technology's large scale limited each chip to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today's standards. As metal–oxide–semiconductor (MOS) technology progressed, millions and then billions of MOS transistors could be placed on one chip, and good designs required thorough planning, giving rise to the field of electronic design automation, or EDA. Some SSI and MSI chips, like discrete transistors, are still mass-produced, both to maintain old equipment and build new devices that require only a few gates. The 7400 series of TTL chips, for example, has become a de facto standard and remains in production.
The first integrated circuits contained only a few transistors. Early digital circuits containing tens of transistors provided a few logic gates, and early linear ICs such as the Plessey SL201 or the Philips TAA320 had as few as two transistors. The number of transistors in an integrated circuit has increased dramatically since then. The term "large scale integration" (LSI) was first used by IBM scientist Rolf Landauer when describing the theoretical concept; that term gave rise to the terms "small-scale integration" (SSI), "medium-scale integration" (MSI), "very-large-scale integration" (VLSI), and "ultra-large-scale integration" (ULSI). The early integrated circuits were SSI.
SSI circuits were crucial to early aerospace projects, and aerospace projects helped inspire development of the technology. Both the Minuteman missile and Apollo program needed lightweight digital computers for their inertial guidance systems. Although the Apollo Guidance Computer led and motivated integrated-circuit technology, it was the Minuteman missile that forced it into mass-production. The Minuteman missile program and various other United States Navy programs accounted for the total $4 million integrated circuit market in 1962, and by 1968, U.S. Government spending on space and defense still accounted for 37% of the $312 million total production.
The demand by the U.S. Government supported the nascent integrated circuit market until costs fell enough to allow IC firms to penetrate the industrial market and eventually the consumer market. The average price per integrated circuit dropped from $50.00 in 1962 to $2.33 in 1968. Integrated circuits began to appear in consumer products by the turn of the 1970s decade. A typical application was FM inter-carrier sound processing in television receivers.
The first application MOS chips were small-scale integration (SSI) chips. Following Mohamed M. Atalla's proposal of the MOS integrated circuit chip in 1960, the earliest experimental MOS chip to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. The first practical application of MOS SSI chips was for NASA satellites.
The next step in the development of integrated circuits introduced devices which contained hundreds of transistors on each chip, called "medium-scale integration" (MSI).
MOSFET scaling technology made it possible to build high-density chips. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips.
In 1964, Frank Wanlass demonstrated a single-chip 16-bit shift register he designed, with a then-incredible 120 MOS transistors on a single chip. The same year, General Microelectronics introduced the first commercial MOS integrated circuit chip, consisting of 120 p-channel MOS transistors. It was a 20-bit shift register, developed by Robert Norman and Frank Wanlass. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to chips with hundreds of MOSFETs on a chip by the late 1960s.
Further development, driven by the same MOSFET scaling technology and economic factors, led to "large-scale integration" (LSI) by the mid-1970s, with tens of thousands of transistors per chip.
The masks used to process and manufacture SSI, MSI and early LSI and VLSI devices (such as the microprocessors of the early 1970s) were mostly created by hand, often using Rubylith-tape or similar. For large or complex ICs (such as memories or processors), this was often done by specially hired professionals in charge of circuit layout, placed under the supervision of a team of engineers, who would also, along with the circuit designers, inspect and verify the correctness and completeness of each mask.
Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, that began to be manufactured in moderate quantities in the early 1970s, had under 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors.
"Very-large-scale integration" (VLSI) is a development started with hundreds of thousands of transistors in the early 1980s, and, as of 2023, transistor counts continue to grow beyond 5.3 trillion transistors per chip.
Multiple developments were required to achieve this increased density. Manufacturers moved to smaller MOSFET design rules and cleaner fabrication facilities. The path of process improvements was summarized by the International Technology Roadmap for Semiconductors (ITRS), which has since been succeeded by the International Roadmap for Devices and Systems (IRDS). Electronic design tools improved, making it practical to finish designs in a reasonable time. The more energy-efficient CMOS replaced NMOS and PMOS, avoiding a prohibitive increase in power consumption. The complexity and density of modern VLSI devices made it no longer feasible to check the masks or do the original design by hand. Instead, engineers use EDA tools to perform most functional verification work.
In 1986, one-megabit random-access memory (RAM) chips were introduced, containing more than one million transistors. Microprocessor chips passed the million-transistor mark in 1989, and the billion-transistor mark in 2005. The trend continues largely unabated, with chips introduced in 2007 containing tens of billions of memory transistors.
To reflect further growth of the complexity, the term ULSI that stands for "ultra-large-scale integration" was proposed for chips of more than 1 million transistors.
Wafer-scale integration (WSI) is a means of building very large integrated circuits that uses an entire silicon wafer to produce a single "super-chip". Through a combination of large size and reduced packaging, WSI could lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term Very-Large-Scale Integration, the current state of the art when WSI was being developed.
A system-on-a-chip (SoC or SOC) is an integrated circuit in which all the components needed for a computer or other system are included on a single chip. The design of such a device can be complex and costly, and whilst performance benefits can be had from integrating all needed components on one die, the cost of licensing and developing a one-die machine still outweighs that of having separate devices. With appropriate licensing, these drawbacks are offset by lower manufacturing and assembly costs and by a greatly reduced power budget: because signals among the components are kept on-die, much less power is required (see Packaging). Further, signal sources and destinations are physically closer on die, reducing the length of wiring and therefore latency, transmission power costs and waste heat from communication between modules on the same chip. This has led to an exploration of so-called Network-on-Chip (NoC) devices, which apply system-on-chip design methodologies to digital communication networks as opposed to traditional bus architectures.
A three-dimensional integrated circuit (3D-IC) has two or more layers of active electronic components that are integrated both vertically and horizontally into a single circuit. Communication between layers uses on-die signaling, so power consumption is much lower than in equivalent separate circuits. Judicious use of short vertical wires can substantially reduce overall wire length for faster operation.
To allow identification during production, most silicon chips will have a serial number in one corner. It is also common to add the manufacturer's logo. Ever since ICs were created, some chip designers have used the silicon surface area for surreptitious, non-functional images or words. These are sometimes referred to as chip art, silicon art, silicon graffiti or silicon doodling. | [
{
"paragraph_id": 0,
"text": "An integrated circuit (also known as an IC, a chip, or a microchip) is a set of electronic circuits on one small flat piece of semiconductor material, usually silicon. In an IC, a large numbers of miniaturized transistors and other electronic components are integrated together on the chip. This results in circuits that are orders of magnitude smaller, faster, and less expensive than those constructed of discrete components, allowing a large transistor count.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The IC's mass production capability, reliability, and building-block approach to integrated circuit design have ensured the rapid adoption of standardized ICs in place of designs using discrete transistors. ICs are now used in virtually all electronic equipment and have revolutionized the world of electronics. Computers, mobile phones and other home appliances are now essential parts of the structure of modern societies, made possible by the small size and low cost of ICs such as modern computer processors and microcontrollers.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Very-large-scale integration was made practical by technological advancements in semiconductor device fabrication. Since their origins in the 1960s, the size, speed, and capacity of chips have progressed enormously, driven by technical advances that fit more and more transistors on chips of the same size – a modern chip may have many billions of transistors in an area the size of a human fingernail. These advances, roughly following Moore's law, make the computer chips of today possess millions of times the capacity and thousands of times the speed of the computer chips of the early 1970s.",
"title": ""
},
{
"paragraph_id": 3,
"text": "ICs have three main advantages over discrete circuits: size, cost and performance. The size and cost is low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time. Furthermore, packaged ICs use much less material than discrete circuits. Performance is high because the IC's components switch quickly and consume comparatively little power because of their small size and proximity. The main disadvantage of ICs is the high initial cost of designing them and the enormous capital cost of factory construction. This high initial cost means ICs are only commercially viable when high production volumes are anticipated.",
"title": ""
},
{
"paragraph_id": 4,
"text": "An integrated circuit is defined as:",
"title": "Terminology"
},
{
"paragraph_id": 5,
"text": "A circuit in which all or some of the circuit elements are inseparably associated and electrically interconnected so that it is considered to be indivisible for the purposes of construction and commerce.",
"title": "Terminology"
},
{
"paragraph_id": 6,
"text": "In strict usage integrated circuit refers to the single-piece circuit construction originally known as a monolithic integrated circuit, built on a single piece of silicon. In general usage, circuits not meeting this strict definition are sometimes referred to as ICs, which are constructed using many different technologies, e.g. 3D IC, 2.5D IC, MCM, thin-film transistors, thick-film technologies, or hybrid integrated circuits. The choice of terminology frequently appears in discussions related to whether Moore's Law is obsolete.",
"title": "Terminology"
},
{
"paragraph_id": 7,
"text": "An early attempt at combining several components in one device (like modern ICs) was the Loewe 3NF vacuum tube from the 1920s. Unlike ICs, it was designed with the purpose of tax avoidance, as in Germany, radio receivers had a tax that was levied depending on how many tube holders a radio receiver had. It allowed radio receivers to have a single tube holder.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Early concepts of an integrated circuit go back to 1949, when German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device showing five transistors on a common substrate in a three-stage amplifier arrangement. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. An immediate commercial use of his patent has not been reported.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Another early proponent of the concept was Geoffrey Dummer (1909–2002), a radar scientist working for the Royal Radar Establishment of the British Ministry of Defence. Dummer presented the idea to the public at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. He gave many symposia publicly to propagate his ideas and unsuccessfully attempted to build such a circuit in 1956. Between 1953 and 1957, Sidney Darlington and Yasuo Tarui (Electrotechnical Laboratory) proposed similar chip designs where several transistors could share a common active area, but there was no electrical isolation to separate them from each other.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The monolithic integrated circuit chip was enabled by the inventions of the planar process by Jean Hoerni and p–n junction isolation by Kurt Lehovec. Hoerni's invention was built on Mohamed M. Atalla's work on surface passivation, as well as Fuller and Ditzenberger's work on the diffusion of boron and phosphorus impurities into silicon, Carl Frosch and Lincoln Derick's work on surface protection, and Chih-Tang Sah's work on diffusion masking by the oxide.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "A precursor idea to the IC was to create small ceramic substrates (so-called micromodules), each containing a single miniaturized component. Components could then be integrated and wired into a bidimensional or tridimensional compact grid. This idea, which seemed very promising in 1957, was proposed to the US Army by Jack Kilby and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy). However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Newly employed by Texas Instruments, Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working example of an integrated circuit on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as \"a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated\". The first customer for the new invention was the US Air Force. Kilby won the 2000 Nobel Prize in physics for his part in the invention of the integrated circuit.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "However, Kilby's invention was not a true monolithic integrated circuit chip since it had external gold-wire connections, which would have made it difficult to mass-produce. Half a year after Kilby, Robert Noyce at Fairchild Semiconductor invented the first true monolithic IC chip. More practical than Kilby's implementation, Noyce's chip was made of silicon, whereas Kilby's was made of germanium, and Noyce's was fabricated using the planar process, developed in early 1959 by his colleague Jean Hoerni and included the critical on-chip aluminum interconnecting lines. Modern IC chips are based on Noyce's monolithic IC, rather than Kilby's.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "NASA's Apollo Program was the largest single consumer of integrated circuits between 1961 and 1965.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Transistor–transistor logic (TTL) was developed by James L. Buie in the early 1960s at TRW Inc. TTL became the dominant integrated circuit technology during the 1970s to early 1980s.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Dozens of TTL integrated circuits were a standard method of construction for the processors of minicomputers and mainframe computers. Computers such as IBM 360 mainframes, PDP-11 minicomputers and the desktop Datapoint 2200 were built from bipolar integrated circuits, either TTL or the even faster emitter-coupled logic (ECL).",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Nearly all modern IC chips are metal–oxide–semiconductor (MOS) integrated circuits, built from MOSFETs (metal–oxide–silicon field-effect transistors). The MOSFET (also known as the MOS transistor), which was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959, made it possible to build high-density integrated circuits. In contrast to bipolar transistors which required a number of steps for the p–n junction isolation of transistors on a chip, MOSFETs required no such steps but could be easily isolated from each other. Its advantage for integrated circuits was pointed out by Dawon Kahng in 1961. The list of IEEE milestones includes the first integrated circuit by Kilby in 1958, Hoerni's planar process and Noyce's planar IC in 1959, and the MOSFET by Atalla and Kahng in 1959.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS integrated circuit in 1964, a 120-transistor shift register developed by Robert Norman. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Following the development of the self-aligned gate (silicon-gate) MOSFET by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC technology with self-aligned gates, the basis of all modern CMOS integrated circuits, was developed at Fairchild Semiconductor by Federico Faggin in 1968. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip. This led to the inventions of the microprocessor and the microcontroller by the early 1970s. During the early 1970s, MOS integrated circuit technology enabled the very large-scale integration (VLSI) of more than 10,000 transistors on a single chip.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "At first, MOS-based computers only made sense when high density was required, such as aerospace and pocket calculators. Computers built entirely from TTL, such as the 1970 Datapoint 2200, were much faster and more powerful than single-chip MOS microprocessors such as the 1972 Intel 8008 until the early 1980s.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Advances in IC technology, primarily smaller features and larger chips, have allowed the number of MOS transistors in an integrated circuit to double every two years, a trend known as Moore's law. Moore originally stated it would double every year, but he went on to change the claim to every two years in 1975. This increased capacity has been used to decrease cost and increase functionality. In general, as the feature size shrinks, almost every aspect of an IC's operation improves. The cost per transistor and the switching power consumption per transistor goes down, while the memory capacity and speed go up, through the relationships defined by Dennard scaling (MOSFET scaling). Because speed, capacity, and power consumption gains are apparent to the end user, there is fierce competition among the manufacturers to use finer geometries. Over the years, transistor sizes have decreased from tens of microns in the early 1970s to 10 nanometers in 2017 with a corresponding million-fold increase in transistors per unit area. As of 2016, typical chip areas range from a few square millimeters to around 600 mm, with up to 25 million transistors per mm.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "The expected shrinking of feature sizes and the needed progress in related areas was forecast for many years by the International Technology Roadmap for Semiconductors (ITRS). The final ITRS was issued in 2016, and it is being replaced by the International Roadmap for Devices and Systems.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Initially, ICs were strictly electronic devices. The success of ICs has led to the integration of other technologies, in an attempt to obtain the same advantages of small size and low cost. These technologies include mechanical devices, optics, and sensors.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "As of 2018, the vast majority of all transistors are MOSFETs fabricated in a single layer on one side of a chip of silicon in a flat two-dimensional planar process. Researchers have produced prototypes of several promising alternatives, such as:",
"title": "History"
},
{
"paragraph_id": 25,
"text": "As it becomes more difficult to manufacture ever smaller transistors, companies are using multi-chip modules, three-dimensional integrated circuits, package on package, High Bandwidth Memory and through-silicon vias with die stacking to increase performance and reduce size, without having to reduce the size of the transistors. Such techniques are collectively known as advanced packaging. Advanced packaging is mainly divided into 2.5D and 3D packaging. 2.5D describes approaches such as multi-chip modules while 3D describes approaches where dies are stacked in one way or another, such as package on package and high bandwidth memory. All approaches involve 2 or more dies in a single package. Alternatively, approaches such as 3D NAND stack multiple layers on a single die.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "The cost of designing and developing a complex integrated circuit is quite high, normally in the multiple tens of millions of dollars. Therefore, it only makes economic sense to produce integrated circuit products with high production volume, so the non-recurring engineering (NRE) costs are spread across typically millions of production units.",
"title": "Design"
},
{
"paragraph_id": 27,
"text": "Modern semiconductor chips have billions of components, and are far too complex to be designed by hand. Software tools to help the designer are essential. Electronic design automation (EDA), also referred to as electronic computer-aided design (ECAD), is a category of software tools for designing electronic systems, including integrated circuits. The tools work together in a design flow that engineers use to design, verify, and analyze entire semiconductor chips. Some of the latest EDA tools use artificial intelligence (AI) to help engineers save time and improve chip performance.",
"title": "Design"
},
{
"paragraph_id": 28,
"text": "Integrated circuits can be broadly classified into analog, digital and mixed signal, consisting of analog and digital signaling on the same IC.",
"title": "Types"
},
{
"paragraph_id": 29,
"text": "Digital integrated circuits can contain billions of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers, use boolean algebra to process \"one\" and \"zero\" signals.",
"title": "Types"
},
{
"paragraph_id": 30,
"text": "Among the most advanced integrated circuits are the microprocessors or \"cores\", used in personal computers, cell-phones, microwave ovens, etc. Several cores may be integrated together in a single IC or chip. Digital memory chips and application-specific integrated circuits (ASICs) are examples of other families of integrated circuits.",
"title": "Types"
},
{
"paragraph_id": 31,
"text": "In the 1980s, programmable logic devices were developed. These devices contain circuits whose logical function and connectivity can be programmed by the user, rather than being fixed by the integrated circuit manufacturer. This allows a chip to be programmed to do various LSI-type functions such as logic gates, adders and registers. Programmability comes in various forms – devices that can be programmed only once, devices that can be erased and then re-programmed using UV light, devices that can be (re)programmed using flash memory, and field-programmable gate arrays (FPGAs) which can be programmed at any time, including during operation. Current FPGAs can (as of 2016) implement the equivalent of millions of gates and operate at frequencies up to 1 GHz.",
"title": "Types"
},
{
"paragraph_id": 32,
"text": "Analog ICs, such as sensors, power management circuits, and operational amplifiers (op-amps), process continuous signals, and perform analog functions such as amplification, active filtering, demodulation, and mixing.",
"title": "Types"
},
{
"paragraph_id": 33,
"text": "ICs can combine analog and digital circuits on a chip to create functions such as analog-to-digital converters and digital-to-analog converters. Such mixed-signal circuits offer smaller size and lower cost, but must account for signal interference. Prior to the late 1990s, radios could not be fabricated in the same low-cost CMOS processes as microprocessors. But since 1998, radio chips have been developed using RF CMOS processes. Examples include Intel's DECT cordless phone, or 802.11 (Wi-Fi) chips created by Atheros and other companies.",
"title": "Types"
},
{
"paragraph_id": 34,
"text": "Modern electronic component distributors often further sub-categorize integrated circuits:",
"title": "Types"
},
{
"paragraph_id": 35,
"text": "The semiconductors of the periodic table of the chemical elements were identified as the most likely materials for a solid-state vacuum tube. Starting with copper oxide, proceeding to germanium, then silicon, the materials were systematically studied in the 1940s and 1950s. Today, monocrystalline silicon is the main substrate used for ICs although some III-V compounds of the periodic table such as gallium arsenide are used for specialized applications like LEDs, lasers, solar cells and the highest-speed integrated circuits. It took decades to perfect methods of creating crystals with minimal defects in semiconducting materials' crystal structure.",
"title": "Manufacturing"
},
{
"paragraph_id": 36,
"text": "Semiconductor ICs are fabricated in a planar process which includes three key process steps – photolithography, deposition (such as chemical vapor deposition), and etching. The main process steps are supplemented by doping and cleaning. More recent or high-performance ICs may instead use multi-gate FinFET or GAAFET transistors instead of planar ones, starting at the 22 nm node (Intel) or 16/14 nm nodes.",
"title": "Manufacturing"
},
{
"paragraph_id": 37,
"text": "Mono-crystal silicon wafers are used in most applications (or for special applications, other semiconductors such as gallium arsenide are used). The wafer need not be entirely silicon. Photolithography is used to mark different areas of the substrate to be doped or to have polysilicon, insulators or metal (typically aluminium or copper) tracks deposited on them. Dopants are impurities intentionally introduced to a semiconductor to modulate its electronic properties. Doping is the process of adding dopants to a semiconductor material.",
"title": "Manufacturing"
},
{
"paragraph_id": 38,
"text": "Since a CMOS device only draws current on the transition between logic states, CMOS devices consume much less current than bipolar junction transistor devices.",
"title": "Manufacturing"
},
{
"paragraph_id": 39,
"text": "A random-access memory is the most regular type of integrated circuit; the highest density devices are thus memories; but even a microprocessor will have memory on the chip. (See the regular array structure at the bottom of the first image.) Although the structures are intricate – with widths which have been shrinking for decades – the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to \"expose\" a layer of material, as they would be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer who might be debugging a fabrication process.",
"title": "Manufacturing"
},
{
"paragraph_id": 40,
"text": "Each device is tested before packaging using automated test equipment (ATE), in a process known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is called a die. Each good die (plural dice, dies, or die) is then connected into a package using aluminium (or gold) bond wires which are thermosonically bonded to pads, usually found around the edge of the die. Thermosonic bonding was first introduced by A. Coucoulas which provided a reliable means of forming these vital electrical connections to the outside world. After packaging, the devices go through final testing on the same or similar ATE used during wafer probing. Industrial CT scanning can also be used. Test cost can account for over 25% of the cost of fabrication on lower-cost products, but can be negligible on low-yielding, larger, or higher-cost devices.",
"title": "Manufacturing"
},
{
"paragraph_id": 41,
"text": "As of 2022, a fabrication facility (commonly known as a semiconductor fab) can cost over US$12 billion to construct. The cost of a fabrication facility rises over time because of increased complexity of new products; this is known as Rock's law. Such a facility features:",
"title": "Manufacturing"
},
{
"paragraph_id": 42,
"text": "ICs can be manufactured either in-house by integrated device manufacturers (IDMs) or using the foundry model. IDMs are vertically integrated companies (like Intel and Samsung) that design, manufacture and sell their own ICs, and may offer design and/or manufacturing (foundry) services to other companies (the latter often to fabless companies). In the foundry model, fabless companies (like Nvidia) only design and sell ICs and outsource all manufacturing to pure play foundries such as TSMC. These foundries may offer IC design services.",
"title": "Manufacturing"
},
{
"paragraph_id": 43,
"text": "The earliest integrated circuits were packaged in ceramic flat packs, which continued to be used by the military for their reliability and small size for many years. Commercial circuit packaging quickly moved to the dual in-line package (DIP), first in ceramic and later in plastic, which is commonly cresol-formaldehyde-novolac. In the 1980s pin counts of VLSI circuits exceeded the practical limit for DIP packaging, leading to pin grid array (PGA) and leadless chip carrier (LCC) packages. Surface mount packaging appeared in the early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as either gull-wing or J-lead, as exemplified by the small-outline integrated circuit (SOIC) package – a carrier which occupies an area about 30–50% less than an equivalent DIP and is typically 70% thinner. This package has \"gull wing\" leads protruding from the two long sides and a lead spacing of 0.050 inches.",
"title": "Manufacturing"
},
{
"paragraph_id": 44,
"text": "In the late 1990s, plastic quad flat pack (PQFP) and thin small-outline package (TSOP) packages became the most common for high pin count devices, though PGA packages are still used for high-end microprocessors.",
"title": "Manufacturing"
},
{
"paragraph_id": 45,
"text": "Ball grid array (BGA) packages have existed since the 1970s. Flip-chip Ball Grid Array packages, which allow for a much higher pin count than other package types, were developed in the 1990s. In an FCBGA package, the die is mounted upside-down (flipped) and connects to the package balls via a package substrate that is similar to a printed-circuit board rather than by wires. FCBGA packages allow an array of input-output signals (called Area-I/O) to be distributed over the entire die rather than being confined to the die periphery. BGA devices have the advantage of not needing a dedicated socket but are much harder to replace in case of device failure.",
"title": "Manufacturing"
},
{
"paragraph_id": 46,
"text": "Intel transitioned away from PGA to land grid array (LGA) and BGA beginning in 2004, with the last PGA socket released in 2014 for mobile platforms. As of 2018, AMD uses PGA packages on mainstream desktop processors, BGA packages on mobile processors, and high-end desktop and server microprocessors use LGA packages.",
"title": "Manufacturing"
},
{
"paragraph_id": 47,
"text": "Electrical signals leaving the die must pass through the material electrically connecting the die to the package, through the conductive traces (paths) in the package, through the leads connecting the package to the conductive traces on the printed circuit board. The materials and structures used in the path these electrical signals must travel have very different electrical properties, compared to those that travel to different parts of the same die. As a result, they require special design techniques to ensure the signals are not corrupted, and much more electric power than signals confined to the die itself.",
"title": "Manufacturing"
},
{
"paragraph_id": 48,
"text": "When multiple dies are put in one package, the result is a system in package, abbreviated SiP. A multi-chip module (MCM), is created by combining multiple dies on a small substrate often made of ceramic. The distinction between a large MCM and a small printed circuit board is sometimes fuzzy.",
"title": "Manufacturing"
},
{
"paragraph_id": 49,
"text": "Packaged integrated circuits are usually large enough to include identifying information. Four common sections are the manufacturer's name or logo, the part number, a part production batch number and serial number, and a four-digit date-code to identify when the chip was manufactured. Extremely small surface-mount technology parts often bear only a number used in a manufacturer's lookup table to find the integrated circuit's characteristics.",
"title": "Manufacturing"
},
{
"paragraph_id": 50,
"text": "The manufacturing date is commonly represented as a two-digit year followed by a two-digit week code, such that a part bearing the code 8341 was manufactured in week 41 of 1983, or approximately in October 1983.",
"title": "Manufacturing"
},
{
"paragraph_id": 51,
"text": "The possibility of copying by photographing each layer of an integrated circuit and preparing photomasks for its production on the basis of the photographs obtained is a reason for the introduction of legislation for the protection of layout designs. The US Semiconductor Chip Protection Act of 1984 established intellectual property protection for photomasks used to produce integrated circuits.",
"title": "Intellectual property"
},
{
"paragraph_id": 52,
"text": "A diplomatic conference held at Washington, D.C., in 1989 adopted a Treaty on Intellectual Property in Respect of Integrated Circuits, also called the Washington Treaty or IPIC Treaty. The treaty is currently not in force, but was partially integrated into the TRIPS agreement.",
"title": "Intellectual property"
},
{
"paragraph_id": 53,
"text": "There are several United States patents connected to the integrated circuit, which include patents by J.S. Kilby US3,138,743, US3,261,081, US3,434,015 and by R.F. Stewart US3,138,747.",
"title": "Intellectual property"
},
{
"paragraph_id": 54,
"text": "National laws protecting IC layout designs have been adopted in a number of countries, including Japan, the EC, the UK, Australia, and Korea. The UK enacted the Copyright, Designs and Patents Act, 1988, c. 48, § 213, after it initially took the position that its copyright law fully protected chip topographies. See British Leyland Motor Corp. v. Armstrong Patents Co.",
"title": "Intellectual property"
},
{
"paragraph_id": 55,
"text": "Criticisms of inadequacy of the UK copyright approach as perceived by the US chip industry are summarized in further chip rights developments.",
"title": "Intellectual property"
},
{
"paragraph_id": 56,
"text": "Australia passed the Circuit Layouts Act of 1989 as a sui generis form of chip protection. Korea passed the Act Concerning the Layout-Design of Semiconductor Integrated Circuits in 1992.",
"title": "Intellectual property"
},
{
"paragraph_id": 57,
"text": "In the early days of simple integrated circuits, the technology's large scale limited each chip to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today's standards. As metal–oxide–semiconductor (MOS) technology progressed, millions and then billions of MOS transistors could be placed on one chip, and good designs required thorough planning, giving rise to the field of electronic design automation, or EDA. Some SSI and MSI chips, like discrete transistors, are still mass-produced, both to maintain old equipment and build new devices that require only a few gates. The 7400 series of TTL chips, for example, has become a de facto standard and remains in production.",
"title": "Generations"
},
{
"paragraph_id": 58,
"text": "The first integrated circuits contained only a few transistors. Early digital circuits containing tens of transistors provided a few logic gates, and early linear ICs such as the Plessey SL201 or the Philips TAA320 had as few as two transistors. The number of transistors in an integrated circuit has increased dramatically since then. The term \"large scale integration\" (LSI) was first used by IBM scientist Rolf Landauer when describing the theoretical concept; that term gave rise to the terms \"small-scale integration\" (SSI), \"medium-scale integration\" (MSI), \"very-large-scale integration\" (VLSI), and \"ultra-large-scale integration\" (ULSI). The early integrated circuits were SSI.",
"title": "Generations"
},
{
"paragraph_id": 59,
"text": "SSI circuits were crucial to early aerospace projects, and aerospace projects helped inspire development of the technology. Both the Minuteman missile and Apollo program needed lightweight digital computers for their inertial guidance systems. Although the Apollo Guidance Computer led and motivated integrated-circuit technology, it was the Minuteman missile that forced it into mass-production. The Minuteman missile program and various other United States Navy programs accounted for the total $4 million integrated circuit market in 1962, and by 1968, U.S. Government spending on space and defense still accounted for 37% of the $312 million total production.",
"title": "Generations"
},
{
"paragraph_id": 60,
"text": "The demand by the U.S. Government supported the nascent integrated circuit market until costs fell enough to allow IC firms to penetrate the industrial market and eventually the consumer market. The average price per integrated circuit dropped from $50.00 in 1962 to $2.33 in 1968. Integrated circuits began to appear in consumer products by the turn of the 1970s decade. A typical application was FM inter-carrier sound processing in television receivers.",
"title": "Generations"
},
{
"paragraph_id": 61,
"text": "The first application MOS chips were small-scale integration (SSI) chips. Following Mohamed M. Atalla's proposal of the MOS integrated circuit chip in 1960, the earliest experimental MOS chip to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. The first practical application of MOS SSI chips was for NASA satellites.",
"title": "Generations"
},
{
"paragraph_id": 62,
"text": "The next step in the development of integrated circuits introduced devices which contained hundreds of transistors on each chip, called \"medium-scale integration\" (MSI).",
"title": "Generations"
},
{
"paragraph_id": 63,
"text": "MOSFET scaling technology made it possible to build high-density chips. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips.",
"title": "Generations"
},
{
"paragraph_id": 64,
"text": "In 1964, Frank Wanlass demonstrated a single-chip 16-bit shift register he designed, with a then-incredible 120 MOS transistors on a single chip. The same year, General Microelectronics introduced the first commercial MOS integrated circuit chip, consisting of 120 p-channel MOS transistors. It was a 20-bit shift register, developed by Robert Norman and Frank Wanlass. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to chips with hundreds of MOSFETs on a chip by the late 1960s.",
"title": "Generations"
},
{
"paragraph_id": 65,
"text": "Further development, driven by the same MOSFET scaling technology and economic factors, led to \"large-scale integration\" (LSI) by the mid-1970s, with tens of thousands of transistors per chip.",
"title": "Generations"
},
{
"paragraph_id": 66,
"text": "The masks used to process and manufacture SSI, MSI and early LSI and VLSI devices (such as the microprocessors of the early 1970s) were mostly created by hand, often using Rubylith-tape or similar. For large or complex ICs (such as memories or processors), this was often done by specially hired professionals in charge of circuit layout, placed under the supervision of a team of engineers, who would also, along with the circuit designers, inspect and verify the correctness and completeness of each mask.",
"title": "Generations"
},
{
"paragraph_id": 67,
"text": "Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, that began to be manufactured in moderate quantities in the early 1970s, had under 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors.",
"title": "Generations"
},
{
"paragraph_id": 68,
"text": "\"Very-large-scale integration\" (VLSI) is a development started with hundreds of thousands of transistors in the early 1980s, and, as of 2023, transistor counts continue to grow beyond 5.3 trillion transistors per chip.",
"title": "Generations"
},
{
"paragraph_id": 69,
"text": "Multiple developments were required to achieve this increased density. Manufacturers moved to smaller MOSFET design rules and cleaner fabrication facilities. The path of process improvements was summarized by the International Technology Roadmap for Semiconductors (ITRS), which has since been succeeded by the International Roadmap for Devices and Systems (IRDS). Electronic design tools improved, making it practical to finish designs in a reasonable time. The more energy-efficient CMOS replaced NMOS and PMOS, avoiding a prohibitive increase in power consumption. The complexity and density of modern VLSI devices made it no longer feasible to check the masks or do the original design by hand. Instead, engineers use EDA tools to perform most functional verification work.",
"title": "Generations"
},
{
"paragraph_id": 70,
"text": "In 1986, one-megabit random-access memory (RAM) chips were introduced, containing more than one million transistors. Microprocessor chips passed the million-transistor mark in 1989, and the billion-transistor mark in 2005. The trend continues largely unabated, with chips introduced in 2007 containing tens of billions of memory transistors.",
"title": "Generations"
},
{
"paragraph_id": 71,
"text": "To reflect further growth of the complexity, the term ULSI that stands for \"ultra-large-scale integration\" was proposed for chips of more than 1 million transistors.",
"title": "Generations"
},
{
"paragraph_id": 72,
"text": "Wafer-scale integration (WSI) is a means of building very large integrated circuits that uses an entire silicon wafer to produce a single \"super-chip\". Through a combination of large size and reduced packaging, WSI could lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term Very-Large-Scale Integration, the current state of the art when WSI was being developed.",
"title": "Generations"
},
{
"paragraph_id": 73,
"text": "A system-on-a-chip (SoC or SOC) is an integrated circuit in which all the components needed for a computer or other system are included on a single chip. The design of such a device can be complex and costly, and whilst performance benefits can be had from integrating all needed components on one die, the cost of licensing and developing a one-die machine still outweigh having separate devices. With appropriate licensing, these drawbacks are offset by lower manufacturing and assembly costs and by a greatly reduced power budget: because signals among the components are kept on-die, much less power is required (see Packaging). Further, signal sources and destinations are physically closer on die, reducing the length of wiring and therefore latency, transmission power costs and waste heat from communication between modules on the same chip. This has led to an exploration of so-called Network-on-Chip (NoC) devices, which apply system-on-chip design methodologies to digital communication networks as opposed to traditional bus architectures.",
"title": "Generations"
},
{
"paragraph_id": 74,
"text": "A three-dimensional integrated circuit (3D-IC) has two or more layers of active electronic components that are integrated both vertically and horizontally into a single circuit. Communication between layers uses on-die signaling, so power consumption is much lower than in equivalent separate circuits. Judicious use of short vertical wires can substantially reduce overall wire length for faster operation.",
"title": "Generations"
},
{
"paragraph_id": 75,
"text": "To allow identification during production, most silicon chips will have a serial number in one corner. It is also common to add the manufacturer's logo. Ever since ICs were created, some chip designers have used the silicon surface area for surreptitious, non-functional images or words. These are sometimes referred to as chip art, silicon art, silicon graffiti or silicon doodling.",
"title": "Silicon labeling and graffiti"
}
]
| An integrated circuit is a set of electronic circuits on one small flat piece of semiconductor material, usually silicon. In an IC, a large number of miniaturized transistors and other electronic components are integrated together on the chip. This results in circuits that are orders of magnitude smaller, faster, and less expensive than those constructed of discrete components, allowing a large transistor count. The IC's mass production capability, reliability, and building-block approach to integrated circuit design have ensured the rapid adoption of standardized ICs in place of designs using discrete transistors. ICs are now used in virtually all electronic equipment and have revolutionized the world of electronics. Computers, mobile phones and other home appliances are now essential parts of the structure of modern societies, made possible by the small size and low cost of ICs such as modern computer processors and microcontrollers. Very-large-scale integration was made practical by technological advancements in semiconductor device fabrication. Since their origins in the 1960s, the size, speed, and capacity of chips have progressed enormously, driven by technical advances that fit more and more transistors on chips of the same size – a modern chip may have many billions of transistors in an area the size of a human fingernail. These advances, roughly following Moore's law, make the computer chips of today possess millions of times the capacity and thousands of times the speed of the computer chips of the early 1970s. ICs have three main advantages over discrete circuits: size, cost and performance. The size and cost are low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time. Furthermore, packaged ICs use much less material than discrete circuits. Performance is high because the IC's components switch quickly and consume comparatively little power because of their small size and proximity. The main disadvantage of ICs is the high initial cost of designing them and the enormous capital cost of factory construction. This high initial cost means ICs are only commercially viable when high production volumes are anticipated. | 2001-10-12T12:52:48Z | 2023-12-30T22:37:38Z | [
"Template:Wafer bonding",
"Template:Authority control",
"Template:Cite news",
"Template:Electronic components",
"Template:Citation needed",
"Template:Reflist",
"Template:Cite patent",
"Template:Commons category-inline",
"Template:Digital electronics",
"Template:Semiconductor packages",
"Template:Redirect",
"Template:As of",
"Template:Main",
"Template:US patent",
"Template:Portal",
"Template:Cite book",
"Template:Short description",
"Template:Use dmy dates",
"Template:More citations needed",
"Template:Notelist",
"Template:Cite magazine",
"Template:Processor technologies",
"Template:Efn",
"Template:See also",
"Template:Electronic systems",
"Template:Which",
"Template:Citation",
"Template:Abbr",
"Template:MOS Interface",
"Template:MOS Video/Sound",
"Template:See",
"Template:Anchor",
"Template:Cite conference",
"Template:Technology topics",
"Template:Computer science",
"Template:Rp",
"Template:Cite journal",
"Template:Snd",
"Template:Cite web"
]
| https://en.wikipedia.org/wiki/Integrated_circuit |
15,154 | IBM 3270 | The IBM 3270 is a family of block-oriented display and printer computer terminals introduced by IBM in 1971 and normally used to communicate with IBM mainframes. The 3270 was the successor to the IBM 2260 display terminal. Due to the text color on the original models, these terminals are informally known as green screen terminals. Unlike a character-oriented terminal, the 3270 minimizes the number of I/O interrupts required by transferring large blocks of data known as data streams, and uses a high-speed proprietary communications interface over coaxial cable.
IBM no longer manufactures 3270 terminals, but the IBM 3270 protocol is still commonly used via TN3270 clients, 3270 terminal emulation or web interfaces to access mainframe-based applications, which are sometimes referred to as green screen applications.
The 3270 series was designed to connect with mainframe computers, often at a remote location, using the technology then available in the early 1970s. The main goal of the system was to maximize the number of terminals that could be used on a single mainframe. To do this, the 3270 was designed to minimize the amount of data transmitted, and minimize the frequency of interrupts to the mainframe. By ensuring the CPU is not interrupted at every keystroke, a 1970s-era IBM 3033 mainframe fitted with only 16 MB of main memory was able to support up to 17,500 3270 terminals under CICS.
Most 3270 devices are clustered, with one or more displays or printers connected to a control unit (the 3275 and 3276 included an integrated control unit). Originally devices were connected to the control unit over coaxial cable; later Token Ring, twisted pair, or Ethernet connections were available. A local control unit attaches directly to the channel of a nearby mainframe. A remote control unit is connected to a communications line by a modem. Remote 3270 controllers are frequently multi-dropped, with multiple control units on a line.
IBM 3270 devices are connected to a 3299 multiplexer or to the cluster controller, e.g., 3271, 3272, 3274, 3174, using RG-62 (93 ohm) coax cables in a point-to-point configuration with one dedicated cable per terminal. Data is sent with a bit rate of 2.3587 Mbit/s using a slightly modified differential Manchester encoding. Cable runs of up to 1,500 m (4,900 ft) are supported, although IBM documents routinely stated the maximum supported coax cable length was 2,000 ft (610 m). Originally devices were equipped with BNC connectors, which were later replaced with so-called Dual Purpose Connectors (DPC) that supported the IBM shielded twisted pair cabling system without the need for so-called red baluns.
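As an illustration of the line coding involved, the Python sketch below encodes a bit sequence using generic differential Manchester rules: a transition at the start of a bit cell for a 0, no transition for a 1, and a mandatory mid-cell transition that carries the clock. The convention and the function name are assumptions chosen for illustration; the 3270 coax signalling described above is only said to be a slightly modified variant of this scheme.

    # Minimal sketch of differential Manchester encoding (assumed convention:
    # a transition at the start of a bit cell encodes 0, no transition encodes 1;
    # every cell has a mid-cell transition that provides clocking).
    def diff_manchester_encode(bits, level=1):
        """Return line levels, two half-cells per bit, for the given bit sequence."""
        half_cells = []
        for bit in bits:
            if bit == 0:
                level = 1 - level      # transition at the cell boundary encodes a 0
            half_cells.append(level)   # first half of the cell
            level = 1 - level          # mandatory mid-cell transition
            half_cells.append(level)   # second half of the cell
        return half_cells

    print(diff_manchester_encode([1, 0, 1, 1, 0]))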
In a data stream, both text and control (or formatting functions) are interspersed allowing an entire screen to be painted as a single output operation. The concept of formatting in these devices allows the screen to be divided into fields (clusters of contiguous character cells) for which numerous field attributes, e.g., color, highlighting, character set, protection from modification, can be set. A field attribute occupies a physical location on the screen that also determines the beginning and end of a field. There are also character attributes associated with individual screen locations.
Using a technique known as read modified, a single transmission back to the mainframe can contain the changes from any number of formatted fields that have been modified, but without sending any unmodified fields or static data. This technique increases the terminal throughput the CPU can sustain and minimizes the data transmitted. Some users familiar with character interrupt-driven terminal interfaces find this technique unusual. There is also a read buffer capability that transfers the entire content of the 3270-screen buffer including field attributes. This is mainly used for debugging purposes to preserve the application program screen contents while replacing it, temporarily, with debugging information.
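A simplified model of formatted fields and the read-modified operation is sketched below in Python. The Field and Screen classes, the addresses, and the handling of the modified data tag are hypothetical illustrations of the idea; they are not the actual 3270 orders, attribute bytes, or buffer-address encodings.

    class Field:
        def __init__(self, address, text, protected=False):
            self.address = address    # buffer address of the field's first character
            self.text = text
            self.protected = protected
            self.mdt = False          # modified data tag, set when the operator types

    class Screen:
        def __init__(self, fields):
            self.fields = fields

        def key_in(self, address, text):
            for f in self.fields:
                if f.address == address and not f.protected:
                    f.text, f.mdt = text, True

        def read_modified(self):
            # Only fields whose modified data tag is set travel back to the host.
            return [(f.address, f.text) for f in self.fields if f.mdt]

    screen = Screen([Field(0, "Name:", protected=True),
                     Field(7, ""),                       # unprotected input field
                     Field(87, "Dept:", protected=True),
                     Field(94, "")])                     # unprotected input field
    screen.key_in(7, "SMITH")
    print(screen.read_modified())   # [(7, 'SMITH')] - unmodified fields are not sent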
Early 3270s offered three types of keyboards. The typewriter keyboard came in both a 66 key version, with no programmed function (PF) keys, and a 78 key version with twelve. Both versions had two Program Attention (PA) keys. The data entry keyboard had five PF keys and two PA keys. The operator console keyboard had twelve PF keys and two PA keys. Later 3270s had an Attention key, a Cursor Select key, a System Request key, twenty-four PF keys and three PA keys. There was also a TEST REQ key. When one of these keys is pressed, it will cause its control unit to generate an I/O interrupt to the host computer and present an Attention ID (AID) identifying which key was pressed. Application program functions such as termination, page-up, page-down, or help can be invoked by a single key press, thereby reducing the load on very busy processors.
A downside to this approach was that vi-like behavior, responding to individual keystrokes, was not possible. For the same reason, a port of Lotus 1-2-3 to mainframes with 3279 screens did not meet with success because its programmers were not able to properly adapt the spreadsheet's user interface to a screen-at-a-time rather than character-at-a-time device. But end-user responsiveness was arguably more predictable with 3270, something users appreciated.
Following its introduction the 3270 and compatibles were by far the most commonly used terminals on IBM System/370 and successor systems. IBM and third-party software that included an interactive component took for granted the presence of 3270 terminals and provided a set of ISPF panels and supporting programs.
Conversational Monitor System (CMS) in VM has support for the 3270 continuing to z/VM.
Time Sharing Option (TSO) in OS/360 and successors has line mode command line support and also has facilities for full screen applications, e.g., ISPF.
Device independent Display Operator Console Support (DIDOCS) in Multiple Console Support (MCS) for OS/360 and successors supports 3270 devices and, in fact, MCS in current versions of MVS no longer supports line mode, 2250 and 2260 devices.
The SPF and Program Development Facility (ISPF/PDF) editors for MVS and VM/SP (ISPF/PDF was available for VM, but little used) and the XEDIT editors for VM/SP through z/VM make extensive use of 3270 features.
Customer Information Control System (CICS) has support for 3270 panels. Indeed, from the early 1970s on, CICS applications were often written for the 3270.
Various versions of Wylbur have support for 3270, including support for full-screen applications.
McGill University's MUSIC/SP operating system provided support for 3270 terminals and applications, including a full-screen text editor, a menu system, and a PANEL facility to create 3270 full-screen applications.
The modified data tag is well suited to converting formatted, structured punched card input onto the 3270 display device. With the appropriate programming, any batch program that uses formatted, structured card input can be layered onto a 3270 terminal.
IBM's OfficeVision office productivity software enjoyed great success with 3270 interaction because of its design understanding. And for many years the PROFS calendar was the most commonly displayed screen on office terminals around the world.
A version of the WordPerfect word processor ported to System/370 was designed for the 3270 architecture.
3270 devices can be part of an SNA (Systems Network Architecture) network or a non-SNA network. If the controllers are SNA-connected, they appear to SNA as physical unit (PU) type 2.0 nodes (PU 2.1 for APPN), typically with logical unit (LU) type 1, 2, and 3 devices connected. Local, channel-attached controllers are controlled by VTAM (Virtual Telecommunications Access Method). Remote controllers are controlled by the NCP (Network Control Program) in the front-end processor, i.e., the 3705, 3720, 3725, or 3745, together with VTAM.
One of the first groups to write and provide operating system support for the 3270 and its early predecessors was the University of Michigan, which created the Michigan Terminal System in order for the hardware to be useful outside of the manufacturer. MTS was the default OS at Michigan for many years, and was still used at Michigan well into the 1990s. Many manufacturers, such as GTE, Hewlett-Packard, Honeywell/Incoterm Div, Memorex, ITT Courier, McData, Harris, Alfaskop and Teletype/AT&T, created 3270-compatible terminals, or adapted ASCII terminals such as the HP 2640 series to have a similar block-mode capability that would transmit a screen at a time, with some form validation capability. The industry distinguished between 'system-compatible' and 'plug-compatible' controllers: a system-compatible third-party controller handled the 3270 data stream, which terminated in the unit, while plug-compatible equipment was in addition compatible at the coax level, allowing IBM terminals to be connected to a third-party controller or vice versa. Modern applications are sometimes built upon legacy 3270 applications, using software utilities to capture (screen scraping) screens and transfer the data to web pages or GUI interfaces.
In the early 1990s a popular solution to link PCs with the mainframes was the Irma board, an expansion card that plugged into a PC and connected to the controller through a coaxial cable. 3270 simulators for IRMA and similar adapters typically provide file transfers between the PC and the mainframe using the same protocol as the IBM 3270 PC.
The IBM 3270 display terminal subsystem consists of displays, printers and controllers. Optional features for the 3275 and 3277 are the selector-pen, ASCII rather than EBCDIC character set, an audible alarm, and a keylock for the keyboard. A keyboard numeric lock was available and will lock the keyboard if the operator attempts to enter non-numeric data into a field defined as numeric. Later an Operator Identification Card Reader was added which could read information encoded on a magnetic stripe card.
Generally, 3277 models allow only upper-case input, except for the mixed EBCDIC/APL or text keyboards, which have lower case. Lower-case capability and dead keys were available as an RPQ (Request Price Quotation); these were added to the later 3278 & 3279 models.
A version of the IBM PC called the 3270 PC, released in October 1983, includes 3270 terminal emulation. Later, the 3270 PC/G (graphics), 3270 PC/GX (extended graphics), 3270 Personal Computer AT, 3270 PC AT/G (graphics) and 3270 PC AT/GX (extended graphics) followed.
There are two types of 3270 displays with respect to where the 3270 data stream terminates. For CUT (Control Unit Terminal) displays, the stream terminates in the display controller, and the controller instructs the display to move the cursor, position a character, etc. EBCDIC is translated by the controller into the '3270 character set', and keyboard scan codes from the terminal, read by the controller through a poll, are translated by the controller into EBCDIC. For DFT (Distributed Function Terminal) type displays, most of the 3270 data stream is forwarded to the display by the controller. The display interprets the 3270 protocol itself.
In addition to passing the 3270 data stream directly to the terminal, allowing for features like Extended Attributes (EAB), graphics, etc., DFT also enabled multiple sessions (up to five simultaneous), featured in the 3290 and 3194 multisession displays. This feature was also widely used in second-generation 3270 terminal emulation software.
The Multiple Logical Terminals (MLT) feature of the 3174 controller also enabled multiple sessions from a CUT-type terminal.
The IBM 3279 was IBM's first color terminal. IBM initially announced four models, and later added a fifth model for use as a processor console.
The 3279 was introduced in 1979. The 3279 was widely used as an IBM mainframe terminal before PCs became commonly used for the purpose. It was part of the 3270 series, using the 3270 data stream. Terminals could be connected to a 3274 controller, either channel connected to an IBM mainframe or linked via an SDLC (Synchronous Data Link Control) link. In the Systems Network Architecture (SNA) protocol these terminals were logical unit type 2 (LU2). The basic models 2A and 3A used red and green for input fields, and blue and white for output fields. However, the models 2B and 3B supported seven colors, and when equipped with the optional Programmed Symbol Set feature they had a loadable character set that could be used to show graphics.
The IBM 3279 with its graphics software support, Graphical Data Display Manager (GDDM), was designed at IBM's Hursley Development Laboratory, near Winchester, England.
The 3290 Information Panel is a 17-inch amber monochrome plasma display unit, announced March 8, 1983, capable of displaying in various modes, including as four independent 3278 model 2 terminals or as a single 160×62 terminal; it also supports partitioning. The 3290 supports graphics through the use of programmed symbols. A 3290 application can divide its screen area up into as many as 16 separate explicit partitions (logical screens).
The 3290 is a Distributed Function Terminal (DFT) and requires that the controller do a downstream load (DSL) of microcode from floppy or hard disk.
The 3180 was a monochrome display, introduced on March 20, 1984, that the user could configure for several different basic and extended display modes; all of the basic modes have a primary screen size of 24x80. Modes 2 and 2+ have a secondary size of 24x80, 3 and 3+ have a secondary size of 32x80, 4 and 4+ have a secondary size of 43x80 and 5 and 5+ have a secondary size of 27x132. An application can override the primary and alternate screen sizes for the extended mode. The 3180 also supported a single explicit partition that could be reconfigured under application control.
The IBM 3191 Display Station is an economical monochrome CRT. Models A and B are 1920-character 12-inch CRTs. Models D, E and L are 1920- or 2560-character 14-inch CRTs.
The IBM 3193 Display Station is a high-resolution, portrait-type, monochrome, 380mm (15 inch) CRT image display providing up to letter or A4 size document display capabilities in addition to alphanumeric data. Compressed images can be sent to the 3193 from a scanner and decompression is performed in the 3193. Image data compression is a technique to save transmission time and reduce storage requirements.
The IBM 3194 is a Display Station that features a 1.44 MB 3.5-inch floppy drive and IND$FILE transfer.
Several third-party manufacturers also produced 3270-compatible displays.
GTE manufactured the IS/7800 Video Display System, nominally compatible with IBM 3277 displays attached to a 3271 or 3272. An incompatibility with the RA buffer order broke the logon screen in VM/SE (SEPP).
Harris manufactured the 8000 Series Terminal Systems, compatible with IBM 3277 displays attached to a 3271 or 3272.
Harris later manufactured the 9100–9200 Information Processing Systems, which included
Informer Computer Terminals manufactured a special version of its model 270 terminal that was compatible with the IBM 3270, including the associated coax port used to connect to a 3x74.
Documentation for the following is available at
AT&T introduced the Dataspeed 40 terminal/controller, compatible with the IBM 3275, in 1980.
IBM had two different implementations for supporting graphics. The first was implemented in the optional Programmed Symbol Sets (PSS) of the 3278, 3279 and 3287, which became a standard feature on the later 3279-S3G, a.k.a. 3279G, and was based on piecing together graphics with on-the-fly custom-defined symbols downloaded to the terminal.
The second, later implementation provided All Points Addressable (APA) graphics, a.k.a. vector graphics, allowing more efficient graphics than the older technique. The first terminal to support APA/vector graphics was the 3179G, which was later replaced first by the 3192G and then by the 3472G.
Both implementations are supported by IBM's GDDM - Graphical Data Display Manager, first released in 1979, and by SAS with its SAS/GRAPH software.
The IBM 3279-S3G, a.k.a. 3279G, terminal, announced in 1979, was IBM's graphics replacement for the 3279-3B with PSS. The terminal supported 7 colors, and the graphics were made up of Programmable Symbol sets loaded into the terminal by the graphical application, GDDM - Graphical Data Display Manager, using the Write Structured Field command.
Programmable Symbols are an addition to the normal base character set of Latin characters, numbers, etc. hardwired into the terminal. The 3279G supports six additional symbol sets of 190 symbols each, for a total of 1,140 programmable symbols. Three of the Programmable Symbol sets have three planes each, enabling the Programmable Symbols downloaded to those sets to be colored (red, blue, green), thereby supporting a total of seven colors.
Each 'character' cell consists of a 9×12 or a 9×16 dot matrix depending on the screen model. Programming a cell with a symbol requires 18 bytes of data, making the data load quite heavy in some cases compared with classic text screens.
If, for example, one wishes to draw a hyperbola on the screen, the application must first compute the required Programmable Symbols that make up the hyperbola and load them into the terminal. The application then paints the screen by addressing each screen cell position and selecting the appropriate symbol from one of the Programmable Symbol sets, as sketched below.
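To give a feel for the work involved, the following Python sketch (illustrative only, not IBM or GDDM code; the 80×32 screen, the 9×16 cell size, and the curve chosen are assumptions) rasterizes a curve into per-cell bitmaps of the kind an application would have to download as Programmable Symbols before painting the screen:

# Conceptual sketch: turn a curve into per-cell bitmaps ("programmable symbols").
# Assumed geometry for illustration: 80x32 character screen, 9x16 dots per cell.
CELL_W, CELL_H = 9, 16
COLS, ROWS = 80, 32

def rasterize(points):
    """points: iterable of (x, y) pixel coordinates; returns {(col, row): bitmap}."""
    cells = {}
    for x, y in points:
        col, row = x // CELL_W, y // CELL_H
        if not (0 <= col < COLS and 0 <= row < ROWS):
            continue
        bitmap = cells.setdefault((col, row), [[0] * CELL_W for _ in range(CELL_H)])
        bitmap[y % CELL_H][x % CELL_W] = 1
    return cells

# A hyperbola y = k/x, scaled to the pixel grid.
k = 6000
curve = [(x, min(ROWS * CELL_H - 1, k // x)) for x in range(1, COLS * CELL_W)]
cells = rasterize(curve)

# Each distinct non-blank bitmap must be downloaded as one programmable symbol;
# the screen is then painted by selecting that symbol at each affected cell.
distinct = {tuple(map(tuple, bm)) for bm in cells.values()}
print(f"{len(cells)} cells touched, {len(distinct)} distinct symbols to download")

Every distinct cell bitmap occupies one of the programmable symbol slots (at most 1,140 in total on the 3279G), which is one reason the data load could become heavy compared with plain text screens.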
The 3279G could be ordered with an Attribute Select Keyboard, enabling the operator to select attributes, colors, and Programmable Symbol sets, which made that version of the terminal quite distinctive.
The IBM 3179G, announced June 18, 1985, is an IBM mainframe computer terminal providing 80×24 or 80×32 characters, 16 colors, and graphics; it was the first terminal to support APA graphics apart from the 3270 PC/G, 3270 PC/GX, PC AT/G and PC AT/GX.
3179-G terminals combine text and graphics as separate layers on the screen. Although the text and graphics appear combined on the screen, the text layer actually sits over the graphics layer. The text layer contains the usual 3270-style cells which display characters (letters, numbers, symbols, or invisible control characters). The graphics layer is an area of 720×384 pixels. All Points Addressable or vector graphics is used to paint each pixel in one of sixteen colors. As well as being separate layers on the screen, the text and graphics layers are sent to the display in separate data streams, making them completely independent.
The application (i.e., GDDM) sends the vector definitions to the 3179-G, and the work of activating the pixels that represent the picture (the vector-to-raster conversion) is done in the terminal itself. The size of the data stream is related to the number of graphics primitives (lines, arcs, and so on) in the picture. Arcs are split into short vectors that are sent to the 3179-G to be drawn, as in the sketch below. The 3179-G does not store graphic data, and so cannot offload any manipulation function from GDDM. In particular, with user control, each new viewing operation means that the data has to be regenerated and retransmitted.
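The flattening of arcs into short vectors can be pictured with the following Python sketch (a generic chord-subdivision approximation; GDDM's actual algorithm and tolerances are not documented here, so the error bound used is an assumption):

import math

def arc_to_vectors(cx, cy, r, start_deg, end_deg, max_chord_error=0.5):
    """Split a circular arc into short straight vectors (chords).

    The number of chords is chosen so the sagitta (maximum distance between
    a chord and the arc) stays below max_chord_error pixels.
    """
    sweep = math.radians(end_deg - start_deg)
    # sagitta = r * (1 - cos(theta / 2)) for a chord spanning angle theta
    theta = 2 * math.acos(max(0.0, 1.0 - max_chord_error / r))
    n = max(1, math.ceil(abs(sweep) / theta))
    pts = [(cx + r * math.cos(math.radians(start_deg) + sweep * i / n),
            cy + r * math.sin(math.radians(start_deg) + sweep * i / n))
           for i in range(n + 1)]
    return list(zip(pts, pts[1:]))   # consecutive (start, end) point pairs

# Example: a quarter circle of radius 100 pixels becomes a handful of short vectors.
for (x0, y0), (x1, y1) in arc_to_vectors(0, 0, 100, 0, 90):
    print(f"vector ({x0:6.1f},{y0:6.1f}) -> ({x1:6.1f},{y1:6.1f})")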
The 3179G is a distributed function terminal (DFT) and requires a downstream load (DSL) to load its microcode from the cluster controller's floppy disk or hard drive.
The G10 model has a standard 122-key typewriter keyboard, while the G20 model offers APL on the same layout. The 3179G is compatible with IBM System/370, the IBM 4300 series, 303x, 308x, IBM 3090, and IBM 9370.
The IBM 3192G, announced in 1987, was the successor to the 3179G. It featured 16 colors and support for printers (e.g., the IBM Proprinter) for local hardcopy with graphics support, or a system printer (text only) implemented as an additional LU.
The IBM 3472G announced in 1989 was the successor to 3192G and featured five concurrent sessions, one of which could be graphics. Unlike the 3192-G, it needed no expansion unit to attach a mouse or color plotter, and it could also attach a tablet device for digitised input and a bar code reader.
Most IBM terminals, starting with the 3277, could be delivered with an APL keyboard, allowing the operator/programmer to enter APL symbolic instructions directly into the editor. In order to display APL symbols, the terminal had to be equipped with an APL character set in addition to the normal 3270 character set. In the data stream, the APL character set is addressed by a preceding Graphic Escape X'08' instruction.
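As a small illustration, an outbound data stream mixes base-set EBCDIC characters with GE-prefixed code points taken from the alternate (APL) set; the Python sketch below builds such a sequence (the APL code point shown is a placeholder, not a value from IBM's code tables):

GE = 0x08            # Graphic Escape order, per the description above

def text(ebcdic_bytes):
    """Characters taken from the base EBCDIC character set."""
    return bytes(ebcdic_bytes)

def apl_char(code_point):
    """One character from the alternate (APL) set: GE followed by the code point.
    0x7C below is a placeholder, not an actual APL symbol code."""
    return bytes([GE, code_point])

stream = text([0xC1, 0x7E]) + apl_char(0x7C) + text([0xC2])   # 'A=' <APL char> 'B'
print(stream.hex(" "))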
With the advent of the graphic terminal 3179G, the APL character set was expandable to 138 characters, called APL2. The added characters were: Diamond, Quad Null, Iota Underbar, Epsilon Underbar, Left Tack, Right Tack, Equal Underbar, Squished Quad, Quad Slope, and Dieresis Dot. Later, APL2 symbols were supported by the 3191 Models D, E, and L, the CUT version of the 3192, and the 3472.
Note that IBM's version of APL is also called APL2.
In 1984, IBM announced IPDS (Intelligent Printer Data Stream) for online printing of AFP (Advanced Function Presentation) documents, using bidirectional communication between the application and the printer. IPDS supports, among other things, printing of text, fonts, images, graphics, and barcodes. The IBM 4224 is one of the IPDS-capable dot matrix printers.
With the emergence of printers, including laser printers, from HP, Canon, and others aimed at the PC market, 3270 customers gained an alternative to IBM 3270 printers by attaching such printers through printer protocol converters from manufacturers like I-data, MPI Tech, Adacom, and others. The printer protocol converters basically emulate a 3287-type printer, and were later extended to support IPDS.
The IBM 3482 terminal, announced in 1992, offered a printer port, which could be used for host addressable printing as well as local screen copy.
In the later versions of the 3174, the Asynchronous Emulation Adapter (AEA), supporting asynchronous RS-232 character-based terminals, was enhanced to support printers equipped with a serial interface.
On the 3274 and 3174, IBM used the term configuration support letter, sometimes followed by a release number, to designate a list of features together with the hardware and microcode needed to support them.
By 1994 the 3174 Establishment Controller supported features such as attachment to multiple hosts via Token Ring, Ethernet, or X.25 in addition to the standard channel attach or SDLC; terminal attachment via twisted pair, Token Ring or Ethernet in addition to co-ax; and TN3270. They also support attachment of asynchronous ASCII terminals, printers, and plotters alongside 3270 devices.
IBM introduced the 3274 controller family in 1977, replacing the 3271–2 product line.
Where the features of the 3271–2 were hardcoded, the 3274 was controlled by microcode read from the 3274's built-in 8-inch floppy drive.
3274 models included 8-, 12-, 16-, and 32-port remote controllers and 32-port local channel-attached units. In total, 16 different models were released to the market over time. The 3274-1A was an SNA Physical Unit type 2.0 (PU2.0); it required only a single address on the channel for all 32 devices and was not compatible with the 3272. The 3274-1B and 3274-1D were compatible with the 3272 and were referred to as local non-SNA models.
The 3274 controllers introduced a new generation of the coax protocol, named Category A, to differentiate them from the Category B coax devices, such as the 3277 terminal and the 3284 printer. The first Category A coax devices were the 3278 and the first color terminal, the IBM 3279 Color Display Station.
To enable backward compatibility, it was possible to install coax boards, so-called 'panels', in groups of 4 or 8 ports supporting the older Category B coax devices. A maximum of 16 Category B terminals could be supported, and only 8 if the controller was fully loaded with the maximum of 4 panels each supporting 8 Category A devices.
During its life span, the 3274 supported several features including:
IBM introduced the 3174 Subsystem Control Unit in 1986, replacing the 3274 product line.
The 3174 was designed to enhance the 3270 product line with many new connectivity options and features. Like the 3274, it was customizable; the main differences were that it used smaller (5.25-inch) diskettes than the 3274's 8-inch diskettes, and that the larger floor models had 10 slots for adapters, some of which were occupied by default by a channel adapter/serial interface, coax adapter, etc. Unlike the 3274, any local model could be configured as either local SNA or local non-SNA, including PU2.1 (APPN).
The models included: 01L, 01R, 02R, 03R, 51R, 52R, 53R, 81R and 82R.
The 01L was local channel attached, the R models were remotely connected, and the x3R models were Token Ring (upstream) connected. The 0xL/R models were floor units supporting up to 32 coax devices through the use of internal or external multiplexers (TMA/3299). The 5xR models were shelf units with 9 coax ports, expandable to 16 by connecting a 3299 multiplexer. The smallest desktop units, the 8xR, had 4 coax ports, expandable to 8 by connecting a 3299 multiplexer.
In the 3174 controller line, IBM also slightly altered the classic BNC coax connector, changing it to the DPC – Dual Purpose Connector. The DPC female connector was a few millimeters longer and had a built-in switch that detected whether a normal BNC connector or a newer DPC connector was attached, switching the physical layer from 93-ohm unbalanced coax to 150-ohm balanced twisted pair and thereby directly supporting the IBM Cabling System without the need for a so-called red balun.
Configuration Support A was the first microcode offered with the 3174. It supported all the hardware modules present at the time and almost all the microcode features found in the 3274, and it introduced a number of new features including: Intelligent Printer Data Stream (IPDS), Multiple Logical Terminals, Country Extended Code Page (CECP), Response Time Monitor, and Token Ring configured as a host interface.
Configuration Support S, which somewhat confusingly followed release A, allowed a local or remote controller to act as a 3270 Token-Ring DSPU gateway, supporting up to 80 downstream PUs.
In 1989, IBM introduced a new range of 3174 models and changed the name from 3174 Subsystem Control Unit to 3174 Establishment Controller. The main new feature was support for an additional 32 coax ports in floor models.
The models included: 11L, 11R, 12R, 13R, 61R, 62R, 63R, 91R, and 92R.
The new line of controllers came with Configuration Support B release 1, which increased the number of supported DSPUs on the Token-Ring gateway to 250 units and at the same time introduced 'Group Polling', which offloaded the mainframe/VTAM polling requirement on the channel.
Configuration Support B releases 2 to 5 enabled features such as Local Format Storage (CICS screen buffer), Type Ahead, Null/Space Processing, and ESCON channel support.
In 1990–1991, a total of 7 more models were added: 21R, 21L, 12L, 22L, 22R, 23R, and 90R. The 12L offered ESCON fiber-optic channel attachment. The models with a 2x designation were equivalent to the 1x models but repackaged for rack mounting and offered only 4 adapter slots. The 90R was not intended as a coax controller; it was positioned as a Token Ring 3270 DSPU gateway. However, it did have one coax port for configuring the unit, which could be expanded to 8 with a 3299 multiplexer.
This line of controllers came with Configuration Support C, adding support for ISDN, APPN, and Peer Communication. The ISDN feature allowed downstream devices, typically PCs, to connect to the 3174 via the ISDN network. The APPN support enabled the 3174 to be part of an APPN network, and Peer Communication allowed coax-attached PCs with 'Peer Communication Support' to access resources on the Token-Ring network attached to the 3174.
The subsequent releases 2 to 6 of Configuration Support C added support for: split screen; copy from session to session; a calculator function; access to an AS/400 host with 5250 keyboard emulation; numerous APPN enhancements; TCP/IP Telnet support, allowing 3270 CUT terminals to communicate with TCP/IP servers using Telnet while, in another screen, communicating with the mainframe using native 3270; TN3270 support, where the 3174 could connect to a TN3270 host/gateway, eliminating SNA but preserving the 3270 data stream; and IP forwarding, allowing LAN-attached (Token-Ring or Ethernet) devices downstream of the 3174 to route IP traffic onto the Frame Relay WAN interface.
In 1993, three new models were added with the announcement of the Ethernet Adapter (FC 3045). The models were: 14R, 24R, and 64R.
This was also IBM's final hardware announcement for the 3174.
The floor models, and the rack-mountable units, could be expanded with a range of special 3174 adapters, that by 1993 included: Channel adapter, ESCON adapter, Serial (V.24/V.35) adapter, Concurrent Communication Adapter, Coax adapter, Fiber optic "coax" adapter, Async adapter, ISDN adapter, Token-Ring adapter, Ethernet adapter, and line encryption adapter.
In 1994, IBM incorporated the functions of RPQ 8Q0935 into Configuration Support-C release 3, including the TN3270 client.
The GTE IS/7800 Video Display Systems used one of two nominally IBM compatible controllers:
The Harris 8000 Series Terminal Systems used one of four controllers:
An alternative implementation of an establishment controller exists in the form of the OEC (Open Establishment Controller). It is a combination of an Arduino shield with a BNC connector and a Python program that runs on a POSIX system. OEC allows a 3270 display to be connected to IBM mainframes via TN3270 or to other systems via VT100. Currently only CUT displays, not DFT displays, are supported.
Memorex had two controllers for its 3277-compatible 1377; the 1371 for remote connection and the 1372 for local connection.
Later, Memorex offered a series of controllers compatible with the IBM 3274 and 3174.
IBM offered a device called 3299 that acted as a multiplexer between an accordingly configured 3274 controller, with the 9901 multiplexer feature, and up to eight displays/printers, thereby reducing the number of coax cables between the 3x74 controller and the displays/printers.
With the introduction of the 3174 controller, internal or external multiplexers (3299) became mainstream, as the 3174-1L controller was equipped with four multiplexed ports, each supporting eight devices. The internal 3174 multiplexer card was named TMA – Terminal Multiplexer Adapter 9176.
A number of vendors manufactured 3270 multiplexers before and alongside IBM, including Fibronics and Adacom, offering multiplexers that supported TTP (Telephone Twisted Pair) as an alternative to coax, as well as fiber-optic links between the multiplexers.
In some instances, the multiplexer worked as an "expansion" unit on smaller remote controllers including the 3174-81R / 91R, where the 3299 expanded the number of coax ports from four to eight, or the 3174-51R / 61R, where the 3299 expanded the number of coax ports from eight to 16.
The IBM 3270 display terminal subsystem was designed and developed by IBM's Kingston, New York, laboratory (which later closed during IBM's difficult time in the mid-1990s). The printers were developed by the Endicott, New York, laboratory. As the subsystem expanded, the 3276 display-controller was developed by the Fujisawa laboratory, Japan, and later the Yamato laboratory; and the 3279 color display and 3287 color printer by the Hursley, UK, laboratory. The subsystem products were manufactured in Kingston (displays and controllers), Endicott (printers), and Greenock, Scotland, UK (most products), and shipped to users in the U.S. and worldwide. 3278 terminals continued to be manufactured in Hortolândia, near Campinas, Brazil, as late as the late 1980s, with their internals redesigned by a local engineering team using modern CMOS technology while retaining the external look and feel.
Telnet 3270, or tn3270, describes both the process of sending and receiving 3270 data streams using the telnet protocol and the software that emulates a 3270-class terminal and communicates using that process. tn3270 allows a 3270 terminal emulator to communicate over a TCP/IP network instead of an SNA network. Telnet 3270 can be used for either terminal or print connections. Standard telnet clients cannot be used as a substitute for tn3270 clients, as they use fundamentally different techniques for exchanging data.
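At the byte level, a tn3270 connection carries ordinary 3270 data streams framed as telnet records. The following Python sketch shows only that framing (the preceding telnet option negotiation, e.g. TERMINAL-TYPE, BINARY, and END-OF-RECORD, is omitted, and the example payload is arbitrary):

IAC = 0xFF   # telnet "Interpret As Command" prefix
EOR = 0xEF   # telnet End-Of-Record marker, which delimits 3270 records

def frame_record(ds: bytes) -> bytes:
    """Prepare one 3270 data stream record for a tn3270 connection:
    double any 0xFF byte in the data, then terminate with IAC EOR."""
    return ds.replace(bytes([IAC]), bytes([IAC, IAC])) + bytes([IAC, EOR])

def unframe_record(rec: bytes) -> bytes:
    """Reverse of frame_record for a single received record."""
    assert rec.endswith(bytes([IAC, EOR]))
    return rec[:-2].replace(bytes([IAC, IAC]), bytes([IAC]))

payload = bytes([0x01, 0xFF, 0x02])          # arbitrary bytes containing 0xFF
framed = frame_record(payload)
print(framed.hex(" "))                       # 01 ff ff 02 ff ef
assert unframe_record(framed) == payload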
The 3270 displays are available with a variety of keyboards and character sets. The following table shows the 3275/3277/3284–3286 character set for US English EBCDIC (optional characters were available for US ASCII, and UK, French, German, and Italian EBCDIC).
On the 3275 and 3277 terminals without the text feature, lower case characters display as uppercase. NL, EM, DUP, and FM control characters display and print as 5, 9, *, and ; characters, respectively, except by the printer when WCC or CCC bits 2 and 3 = '00'b, in which case NL and EM serve their control function and do not print.
Data sent to the 3270 consists of commands, a Copy Control Character (CCC) or Write Control Character (WCC) if appropriate, a device address for copy, orders, character data, and structured fields. Commands instruct the 3270 control unit to perform some action on a specified device, such as a read or write. Orders are sent as part of the data stream to control the format of the device buffer. Structured fields convey additional control functions and data to or from the terminal.
On a local non-SNA controller, the command is a CCW opcode rather than the first byte of the outbound display stream; on all other controllers, the command is the first byte of the display stream, exclusive of protocol headers.
The following table includes datastream commands and CCW opcodes for local non-SNA controllers; it does not include CCW opcodes for local SNA controllers.
The data sent by Write or Erase/Write consists of the command code itself followed by a Write Control Character (WCC) optionally followed by a buffer containing orders or data (or both). The WCC controls the operation of the device. Bits may start printer operation and specify a print format. Other bit settings will sound the audible alarm if installed, unlock the keyboard to allow operator entry, or reset all the Modified Data Tags in the device buffer.
Orders consist of the order code byte followed by zero to three bytes of variable information.
The 3270 has three kinds of attributes:
The original 3277 and 3275 displays used an 8-bit field attribute byte of which five bits were used.
Later models include base color: "Base color (four colors) can be produced on color displays and color printers from current 3270 application programs by use of combinations of the field intensify and field protection attribute bits. For more information on color, refer to IBM 3270 Information System: Color and Programmed Symbols, GA33-3056."
The 3278 and 3279 and later models used extended attributes to add support for seven colors, blinking, reverse video, underscoring, field outlining, field validation, and programmed symbols.
The 3278 and 3279 and later models allowed attributes on individual characters in a field to override the corresponding field attributes.
This allowed programs (such as the LEXX text editor) to assign any font (including the programmable fonts), colour, etc. to any character on the screen.
3270 displays and printers have a buffer containing one byte for every screen position. For example, a 3277 model 2 featured a screen size of 24 rows of 80 columns, for a buffer size of 1920 bytes. Bytes are addressed from zero to the screen size minus one, in this example 1919. "There is a fixed relationship between each ... buffer storage location and its position on the display screen." Most orders start operation at the "current" buffer address, and executing an order or writing data will update this address. The buffer address can be set directly using the Set Buffer Address (SBA) order, often followed by Start Field or Start Field Extended. For a device with a 1920-character display, a twelve-bit address is sufficient. Later 3270s with larger screen sizes use fourteen or sixteen bits.
Addresses are encoded within orders in two bytes. For twelve-bit addresses, the high-order two bits of each byte are set to form valid EBCDIC (or ASCII) characters. For example, address 0 is coded as X'4040', or space-space, and address 1919 is coded as X'5D7F', or ')"'. Programmers hand-coding panels usually keep the table of addresses from the 3270 Component Description or the 3270 Reference Card handy. For fourteen- and sixteen-bit addresses, the address uses contiguous bits in two bytes.
The following data stream writes an attribute in row 24, column 1, writes the (protected) characters '> ' in row 24, columns 2 and 3, and creates an unprotected field on row 24 from columns 5–79. Because the buffer wraps around, an attribute is placed on row 24, column 80 to terminate the input field. This data stream would normally be written using an Erase/Write command, which would set undefined positions on the screen to X'00'. Values are given in hexadecimal.
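A Python sketch of one way to assemble such a data stream is shown below. The Erase/Write command code (X'F5'), the SBA (X'11') and SF (X'1D') order codes, and the twelve-bit address encoding are the standard values from the 3270 data stream reference; the specific WCC and attribute bytes are illustrative assumptions (keyboard restore plus reset MDT for the WCC; plain protected and unprotected, normal-intensity attributes), not the only valid choices:

# 64-entry table used to turn each 6-bit half of a 12-bit buffer address
# (and, with the same layout, the low 6 bits of an attribute byte)
# into a valid EBCDIC graphic character.
ADDRESS_TABLE = [
    0x40, 0xC1, 0xC2, 0xC3, 0xC4, 0xC5, 0xC6, 0xC7, 0xC8, 0xC9, 0x4A, 0x4B, 0x4C, 0x4D, 0x4E, 0x4F,
    0x50, 0xD1, 0xD2, 0xD3, 0xD4, 0xD5, 0xD6, 0xD7, 0xD8, 0xD9, 0x5A, 0x5B, 0x5C, 0x5D, 0x5E, 0x5F,
    0x60, 0x61, 0xE2, 0xE3, 0xE4, 0xE5, 0xE6, 0xE7, 0xE8, 0xE9, 0x6A, 0x6B, 0x6C, 0x6D, 0x6E, 0x6F,
    0xF0, 0xF1, 0xF2, 0xF3, 0xF4, 0xF5, 0xF6, 0xF7, 0xF8, 0xF9, 0x7A, 0x7B, 0x7C, 0x7D, 0x7E, 0x7F,
]

def encode_address(row, col, width=80):
    """Twelve-bit buffer address for row/col (1-based) as two bytes."""
    addr = (row - 1) * width + (col - 1)
    return bytes([ADDRESS_TABLE[(addr >> 6) & 0x3F], ADDRESS_TABLE[addr & 0x3F]])

assert encode_address(1, 1) == bytes([0x40, 0x40])    # address 0 is X'4040'
assert encode_address(24, 80) == bytes([0x5D, 0x7F])  # address 1919 is X'5D7F'

SBA, SF = 0x11, 0x1D            # Set Buffer Address and Start Field orders
EW = 0xF5                       # Erase/Write command
WCC = 0xC3                      # assumed WCC: keyboard restore + reset MDT
PROT, UNPROT = 0x60, 0x40       # assumed attributes: protected / unprotected, normal intensity

stream = (
    bytes([EW, WCC]) +
    bytes([SBA]) + encode_address(24, 1) +     # position to row 24, col 1
    bytes([SF, PROT]) +                        # protected attribute occupies col 1
    bytes([0x6E, 0x40]) +                      # '> ' in EBCDIC, cols 2-3
    bytes([SF, UNPROT]) +                      # unprotected field attribute at col 4; data area is cols 5-79
    bytes([SBA]) + encode_address(24, 80) +    # position to row 24, col 80
    bytes([SF, PROT])                          # attribute terminating the input field
)
print(stream.hex(" "))

Running the sketch prints f5 c3 11 5c f0 1d 60 6e 40 1d 40 11 5d 7f 1d 60, where X'6E40' is '> ' in EBCDIC.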
Most 3270 terminals newer than the 3275, 3277, 3284 and 3286 support an extended data stream (EDS) that allows many new capabilities.
"title": "Telnet 3270"
},
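The difference is visible at connection time: a traditional tn3270 client negotiates the telnet TERMINAL-TYPE, END-OF-RECORD, and BINARY options and identifies itself with a 3270 terminal model string before any 3270 data stream flows. The following is a minimal client-side sketch of that negotiation, assuming a generic host and the traditional (RFC 1576-style) option set; the socket handling is illustrative only and is not a complete emulator.

import socket

# Telnet protocol constants (standard values).
IAC, SB, SE = 255, 250, 240
WILL, WONT, DO, DONT = 251, 252, 253, 254
TERMINAL_TYPE, EOR_OPT, BINARY = 24, 25, 0
TT_IS = 0
ACCEPTED = {TERMINAL_TYPE, EOR_OPT, BINARY}

def negotiate(sock, terminal="IBM-3278-2"):
    """Answer the host's telnet option requests until negotiation goes quiet."""
    sock.settimeout(2.0)
    try:
        while True:
            data = sock.recv(256)
            if not data:
                break
            i = 0
            while i < len(data):
                if data[i] == IAC and i + 2 < len(data):
                    cmd, opt = data[i + 1], data[i + 2]
                    if cmd == DO:         # host asks us to enable an option
                        sock.sendall(bytes([IAC, WILL if opt in ACCEPTED else WONT, opt]))
                        i += 3
                        continue
                    if cmd == WILL:       # host offers to enable an option
                        sock.sendall(bytes([IAC, DO if opt in ACCEPTED else DONT, opt]))
                        i += 3
                        continue
                    if cmd in (DONT, WONT):
                        i += 3
                        continue
                    if cmd == SB and opt == TERMINAL_TYPE:
                        # Host sent IAC SB TERMINAL-TYPE SEND IAC SE; reply with
                        # IAC SB TERMINAL-TYPE IS <model name> IAC SE.
                        sock.sendall(bytes([IAC, SB, TERMINAL_TYPE, TT_IS])
                                     + terminal.encode("ascii")
                                     + bytes([IAC, SE]))
                        i += 6            # skip IAC SB TERMINAL-TYPE SEND IAC SE
                        continue
                i += 1
    except socket.timeout:
        pass  # negotiation idle; the 3270 data stream (with EOR markers) follows

# Example use (host name is a placeholder):
# with socket.create_connection(("mainframe.example.com", 23)) as s:
#     negotiate(s)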
{
"paragraph_id": 104,
"text": "The 3270 displays are available with a variety of keyboards and character sets. The following table shows the 3275/3277/3284–3286 character set for US English EBCDIC (optional characters were available for US ASCII, and UK, French, German, and Italian EBCDIC).",
"title": "Technical Information"
},
{
"paragraph_id": 105,
"text": "On the 3275 and 3277 terminals without the a text feature, lower case characters display as uppercase. NL, EM, DUP, and FM control characters display and print as 5, 9, *, and ; characters, respectively, except by the printer when WCC or CCC bits 2 and 3 = '00'b, in which case NL and EM serve their control function and do not print.",
"title": "Technical Information"
},
{
"paragraph_id": 106,
"text": "Data sent to the 3270 consist of commands, a Copy Control Character (CCC) or Write Control Character (WCC) if appropriate, a device address for copy, orders, character data and structured fields. Commands instruct the 3270 control unit to perform some action on a specified device, such as a read or write. Orders are sent as part of the data stream to control the format of the device buffer. Structured fields are to convey additional control functions and data to or from the terminal.",
"title": "Technical Information"
},
{
"paragraph_id": 107,
"text": "On a local non-SNA controller, the command is a CCW opcode rather than the first byte of the outbound display stream; on all other controllers, the command is the first byte of the display stream, exclusive of protocol headers.",
"title": "Technical Information"
},
{
"paragraph_id": 108,
"text": "The following table includes datastream commands and CCW opcodes for local non-SNA controllers; it does not include CCW opcodes for local SNA controllers.",
"title": "Technical Information"
},
{
"paragraph_id": 109,
"text": "The data sent by Write or Erase/Write consists of the command code itself followed by a Write Control Character (WCC) optionally followed by a buffer containing orders or data (or both). The WCC controls the operation of the device. Bits may start printer operation and specify a print format. Other bit settings will sound the audible alarm if installed, unlock the keyboard to allow operator entry, or reset all the Modified Data Tags in the device buffer.",
"title": "Technical Information"
},
{
"paragraph_id": 110,
"text": "Orders consist of the order code byte followed by zero to three bytes of variable information.",
"title": "Technical Information"
},
{
"paragraph_id": 111,
"text": "The 3270 has three kinds of attributes:",
"title": "Technical Information"
},
{
"paragraph_id": 112,
"text": "The original 3277 and 3275 displays used an 8-bit field attribute byte of which five bits were used.",
"title": "Technical Information"
},
{
"paragraph_id": 113,
"text": "Later models include base color: \"Base color (four colors) can be produced on color displays and color printers from current 3270 application programs by use of combinations of the field intensify and field protection attribute bits. For more information on color, refer to IBM 3270 Information System: Color and Programmed Symbols, GA33-3056.\"",
"title": "Technical Information"
},
{
"paragraph_id": 114,
"text": "The 3278 and 3279 and later models used extended attributes to add support for seven colors, blinking, reverse video, underscoring, field outlining, field validation, and programmed symbols.",
"title": "Technical Information"
},
{
"paragraph_id": 115,
"text": "The 3278 and 3279 and later models allowed attributes on individual characters in a field to override the corresponding field attributes.",
"title": "Technical Information"
},
{
"paragraph_id": 116,
"text": "This allowed programs (such as the LEXX text editor) to assign any font (including the programmable fonts), colour, etc. to any character on the screen.",
"title": "Technical Information"
},
{
"paragraph_id": 117,
"text": "3270 displays and printers have a buffer containing one byte for every screen position. For example, a 3277 model 2 featured a screen size of 24 rows of 80 columns for a buffer size of 1920 bytes. Bytes are addressed from zero to the screen size minus one, in this example 1919. \"There is a fixed relationship between each ... buffer storage location and its position on the display screen.\" Most orders start operation at the \"current\" buffer address, and executing an order or writing data will update this address. The buffer address can be set directly using the Set Buffer Address (SBA) order, often followed by Start Field or Start Field Extended. For a device with a 1920 character display a twelve bit address is sufficient. Later 3270s with larger screen sizes use fourteen or sixteen bits.",
"title": "Technical Information"
},
{
"paragraph_id": 118,
"text": "Addresses are encoded within orders in two bytes. For twelve bit addresses the high order two bits of each byte are set to form valid EBCDIC (or ASCII) characters. For example, address 0 is coded as X'4040', or space-space, address 1919 is coded as X'5D7F', or '\"'. Programmers hand-coding panels usually keep the table of addresses from the 3270 Component Description or the 3270 Reference Card handy. For fourteen and sixteen-bit address, the address uses contiguous bits in two bytes.",
"title": "Technical Information"
},
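A short sketch of the twelve-bit encoding follows. The translation table is the commonly published 3270 buffer-address character table (the low six bits of each byte carry the address bits, and the high two bits are chosen so that the result is a graphic character); it is reproduced here from general documentation rather than from this article, so treat it as illustrative. The two addresses quoted above, 0 and 1919, serve as a sanity check.

# Sketch of 12-bit 3270 buffer-address encoding: each 6-bit half of the
# address is translated to a byte whose low six bits carry the address bits
# and whose high two bits make the result a valid graphic character.
ADDRESS_TABLE = [
    0x40, 0xC1, 0xC2, 0xC3, 0xC4, 0xC5, 0xC6, 0xC7,
    0xC8, 0xC9, 0x4A, 0x4B, 0x4C, 0x4D, 0x4E, 0x4F,
    0x50, 0xD1, 0xD2, 0xD3, 0xD4, 0xD5, 0xD6, 0xD7,
    0xD8, 0xD9, 0x5A, 0x5B, 0x5C, 0x5D, 0x5E, 0x5F,
    0x60, 0x61, 0xE2, 0xE3, 0xE4, 0xE5, 0xE6, 0xE7,
    0xE8, 0xE9, 0x6A, 0x6B, 0x6C, 0x6D, 0x6E, 0x6F,
    0xF0, 0xF1, 0xF2, 0xF3, 0xF4, 0xF5, 0xF6, 0xF7,
    0xF8, 0xF9, 0x7A, 0x7B, 0x7C, 0x7D, 0x7E, 0x7F,
]

def encode_address(addr: int) -> bytes:
    """Encode a 12-bit buffer address (0..4095) as two bytes."""
    if not 0 <= addr <= 0x0FFF:
        raise ValueError("twelve-bit addresses only")
    return bytes([ADDRESS_TABLE[(addr >> 6) & 0x3F],
                  ADDRESS_TABLE[addr & 0x3F]])

# The two values quoted in the text serve as a check:
assert encode_address(0) == b"\x40\x40"      # X'4040'
assert encode_address(1919) == b"\x5D\x7F"   # X'5D7F'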
{
"paragraph_id": 119,
"text": "The following data stream writes an attribute in row 24, column 1, writes the (protected) characters '> ' in row 24, columns 2 and 3, and creates an unprotected field on row 24 from columns 5-79. Because the buffer wraps around an attribute is placed on row 24, column 80 to terminate the input field. This data stream would normally be written using an Erase/Write command which would set undefined positions on the screen to '00'x. Values are given in hexadecimal.",
"title": "Technical Information"
},
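The data stream just described can also be sketched in code. The command and order codes used here (Erase/Write X'F5', Set Buffer Address X'11', Start Field X'1D'), the WCC value, the field attribute bytes, and the EBCDIC codes for '>' and space are the commonly documented values rather than values taken from this article, so they should be read as assumptions; encode_address() is reused from the address-encoding sketch above.

# Illustrative reconstruction of the prompt described above on a 24x80 screen.
ERASE_WRITE = 0xF5        # Erase/Write command byte (non-local connections)
WCC_RESTORE_RESET = 0xC3  # WCC: keyboard restore + reset MDT
SBA, SF = 0x11, 0x1D      # Set Buffer Address and Start Field order codes
ATTR_PROTECTED = 0x60     # field attribute: protected, normal intensity
ATTR_UNPROTECTED = 0x40   # field attribute: unprotected, normal intensity
EBCDIC_GT, EBCDIC_SPACE = 0x6E, 0x40  # '>' and space in EBCDIC

def rc(row, col, cols=80):
    """Convert a 1-based row/column pair to a linear buffer address."""
    return (row - 1) * cols + (col - 1)

stream = bytes([ERASE_WRITE, WCC_RESTORE_RESET])
stream += bytes([SBA]) + encode_address(rc(24, 1))   # position to row 24, col 1
stream += bytes([SF, ATTR_PROTECTED])                # attribute for the prompt
stream += bytes([EBCDIC_GT, EBCDIC_SPACE])           # the protected text '> '
stream += bytes([SF, ATTR_UNPROTECTED])              # input field starts at col 5
stream += bytes([SBA]) + encode_address(rc(24, 80))  # position to row 24, col 80
stream += bytes([SF, ATTR_PROTECTED])                # attribute terminating the field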
{
"paragraph_id": 120,
"text": "Most 3270 terminals newer than the 3275, 3277, 3284 and 3286 support an extended data stream (EDS) that allows many new capabilities, including:",
"title": "Technical Information"
}
]
| The IBM 3270 is a family of block oriented display and printer computer terminals introduced by IBM in 1971 and normally used to communicate with IBM mainframes. The 3270 was the successor to the IBM 2260 display terminal. Due to the text color on the original models, these terminals are informally known as green screen terminals. Unlike a character-oriented terminal, the 3270 minimizes the number of I/O interrupts required by transferring large blocks of data known as data streams, and uses a high speed proprietary communications interface, using coaxial cable. IBM no longer manufactures 3270 terminals, but the IBM 3270 protocol is still commonly used via TN3270 clients, 3270 terminal emulation or web interfaces to access mainframe-based applications, which are sometimes referred to as green screen applications. | 2001-10-12T18:46:52Z | 2023-12-08T04:17:22Z | [
"Template:N/a",
"Template:Authority control",
"Template:Sfn",
"Template:Redirect",
"Template:Cvt",
"Template:Rp",
"Template:Cbignore",
"Template:Not a typo",
"Template:Cite manual",
"Template:Cite web",
"Template:Cite book",
"Template:IETF RFC",
"Template:Use American English",
"Template:Chset-table-header1",
"Template:Chset-left1",
"Template:Chset-cell1",
"Template:Expand section",
"Template:Cite magazine",
"Template:Short description",
"Template:Infobox information appliance",
"Template:Cite newsgroup",
"Template:Distinguish",
"Template:Citation needed",
"Template:Chset-ctrl1",
"Template:Notelist",
"Template:Cite IETF",
"Template:Reflist",
"Template:Use mdy dates",
"Template:Efn",
"Template:Clarify",
"Template:Larger"
]
| https://en.wikipedia.org/wiki/IBM_3270 |
15,155 | I. M. Pei | Ieoh Ming Pei FAIA RIBA (/ˌjoʊ mɪŋ ˈpeɪ/ YOH ming PAY; Chinese: 貝聿銘; pinyin: Bèi Yùmíng; April 26, 1917 – May 16, 2019) was a Chinese-American architect. Raised in Shanghai, Pei drew inspiration at an early age from the garden villas at Suzhou, the traditional retreat of the scholar-gentry to which his family belonged. In 1935, he moved to the United States and enrolled in the University of Pennsylvania's architecture school, but he quickly transferred to the Massachusetts Institute of Technology. He was unhappy with the focus on Beaux-Arts architecture at both schools, and spent his free time researching emerging architects, especially Le Corbusier.
After graduating, he joined the Harvard Graduate School of Design (GSD) and became a friend of the Bauhaus architects Walter Gropius and Marcel Breuer. In 1948, Pei was recruited by New York City real estate magnate William Zeckendorf, for whom he worked for seven years before establishing an independent design firm, I. M. Pei & Associates, in 1955. In 1966, that became I. M. Pei & Partners, and became Pei Cobb Freed & Partners in 1989. Pei retired from full-time practice in 1990. In his retirement, he worked as an architectural consultant primarily from his sons' architectural firm Pei Partnership Architects.
Pei's first major recognition came with the Mesa Laboratory at the National Center for Atmospheric Research in Colorado (designed in 1961 and completed in 1967). His new stature led to his selection as chief architect for the John F. Kennedy Library in Massachusetts. He went on to design Dallas City Hall and the East Building of the National Gallery of Art. He returned to China for the first time in 1975 to design a hotel at Fragrant Hills and, fifteen years later, designed the Bank of China Tower, a skyscraper in Hong Kong for the Bank of China.
In the early 1980s, Pei was the focus of controversy when he designed a glass-and-steel pyramid for the Louvre in Paris. He later returned to the world of the arts by designing the Morton H. Meyerson Symphony Center in Dallas, the Miho Museum in Shigaraki, Japan, near Kyoto, and the chapel of the MIHO Institute of Aesthetics (a junior and senior high school), as well as the Suzhou Museum in Suzhou, the Museum of Islamic Art in Qatar, and the Grand Duke Jean Museum of Modern Art, abbreviated to Mudam, in Luxembourg.
Pei won a wide variety of prizes and awards in the field of architecture, including the AIA Gold Medal in 1979, the first Praemium Imperiale for Architecture in 1989, and the Lifetime Achievement Award from the Cooper-Hewitt, National Design Museum, in 2003. In 1983, he won the Pritzker Prize, which is sometimes referred to as the Nobel Prize of architecture.
I. M. Pei's ancestry traces back to the Ming dynasty, when his family moved from Anhui to Suzhou. The family made their wealth in medicinal herbs, then joined the ranks of the scholar-gentry. Pei Ieoh Ming was born on April 26, 1917, to Tsuyee and Lien Kwun, and the family moved to Hong Kong one year later. It eventually included five children. As a boy, Pei was very close to his mother, a devout Buddhist, who was recognized for her skills as a flautist. She invited him, but not his brothers or sisters, to join her on meditation retreats. His relationship with his father was less intimate. Their interactions were respectful but distant.
Pei's ancestors' success meant that the family lived in the upper echelons of society, but Pei said his father was "not cultivated in the ways of the arts". The younger Pei, drawn more to music and other cultural forms than to his father's domain of banking, explored art on his own. "I have cultivated myself," he said later.
Pei studied at St. Paul's College in Hong Kong as a child. When Pei was 10, his father received a promotion and relocated with his family to Shanghai. Pei attended St. John's Middle School, the secondary school of St. John's University, which was run by Anglican missionaries. Academic discipline was rigorous; students were allowed only one half-day each month for leisure. Pei enjoyed playing billiards and watching Hollywood movies, especially those of Buster Keaton and Charlie Chaplin. He also learned rudimentary English skills by reading the Bible and novels by Charles Dickens.
Shanghai's many international elements gave it the name "Paris of the East". The city's global architectural flavors had a profound influence on Pei, from The Bund waterfront area to the Park Hotel, built in 1934. He was also impressed by the many gardens of Suzhou, where he spent the summers with extended family and regularly visited a nearby ancestral shrine. The Shizilin Garden, built in the 14th century by a Buddhist monk and owned by Pei's uncle Bei Runsheng, was especially influential. Its unusual rock formations, stone bridges, and waterfalls remained etched in Pei's memory for decades. He spoke later of his fondness for the garden's blending of natural and human-built structures.
Soon after the move to Shanghai, Pei's mother developed cancer. She was prescribed opium as a pain reliever, and Pei was assigned the task of preparing her pipe. She died shortly after his thirteenth birthday, and he was profoundly upset. The children were sent to live with extended family, as their father became more consumed by his work and more physically distant. Pei said: "My father began living his own separate life pretty soon after that." His father later married a woman named Aileen, who moved to New York later in her life.
As Pei neared the end of his secondary education, he decided to study at a university. He was accepted by a number of schools, but decided to enrol at the University of Pennsylvania. Pei's choice had two roots. While studying in Shanghai, he had closely examined the catalogs for various institutions of higher learning around the world. The architectural program at the University of Pennsylvania stood out to him. The other major factor was Hollywood. Pei was fascinated by the representations of college life in the films of Bing Crosby, which differed tremendously from the academic atmosphere in China. "College life in the U.S. seemed to me to be mostly fun and games", he said in 2000. "Since I was too young to be serious, I wanted to be part of it ... You could get a feeling for it in Bing Crosby's movies. College life in America seemed very exciting to me. It's not real, we know that. Nevertheless, at that time it was very attractive to me. I decided that was the country for me." Pei added that "Crosby's films in particular had a tremendous influence on my choosing the United States instead of England to pursue my education."
In 1935, Pei boarded a boat and sailed to San Francisco, then traveled by train to Philadelphia. What he found once he arrived, however, differed vastly from his expectations. Professors at the University of Pennsylvania based their teaching in the Beaux-Arts style, rooted in the classical traditions of ancient Greece and Rome. Pei was more intrigued by modern architecture, and also felt intimidated by the high level of drafting proficiency shown by other students. He decided to abandon architecture and transferred to the engineering program at Massachusetts Institute of Technology (MIT). Once he arrived, however, the dean of the architecture school commented on his eye for design and convinced Pei to return to his original major.
MIT's architecture faculty was also focused on the Beaux-Arts school, and Pei found himself uninspired by the work. In the library he found three books by the Swiss-French architect Le Corbusier. Pei was inspired by the innovative designs of the new International Style, characterized by simplified form and the use of glass and steel materials. Le Corbusier visited MIT in November 1935, an occasion which powerfully affected Pei: "The two days with Le Corbusier, or 'Corbu' as we used to call him, were probably the most important days in my architectural education." Pei was also influenced by the work of U.S. architect Frank Lloyd Wright. In 1938 he drove to Spring Green, Wisconsin, to visit Wright's famous Taliesin building. After waiting for two hours, however, he left without meeting Wright.
Although he disliked the Beaux-Arts emphasis at MIT, Pei excelled in his studies. "I certainly don't regret the time at MIT", he said later. "There I learned the science and technique of building, which is just as essential to architecture." Pei received his BArch degree in 1940; his thesis was titled "Standardized Propaganda Units for War Time and Peace Time China".
While visiting New York City in the late 1930s, Pei met a Wellesley College student named Eileen Loo. They began dating and married in the spring of 1942. She enrolled in the landscape architecture program at Harvard University, and Pei was thus introduced to members of the faculty at Harvard's Graduate School of Design (GSD). He was excited by the lively atmosphere and joined the GSD in December 1942.
Less than a month later, Pei suspended his work at Harvard to join the National Defense Research Committee, which coordinated scientific research into U.S. weapons technology during World War II. Pei's background in architecture was seen as a considerable asset; one member of the committee told him: "If you know how to build you should also know how to destroy." The fight against Germany was ending, so he focused on the Pacific War. The U.S. realized that its bombs used against the stone buildings of Europe would be ineffective against Japanese cities, mostly constructed from wood and paper; Pei was assigned to work on incendiary bombs. Pei spent two and a half years with the NDRC, but revealed few details of his work.
In 1945, Eileen gave birth to a son, T'ing Chung, and she withdrew from the landscape architecture program in order to care for him. Pei returned to Harvard in the autumn of 1945, and received a position as assistant professor of design. The GSD was developing into a hub of resistance to the Beaux-Arts orthodoxy. At the center were members of the Bauhaus, a European architectural movement that had advanced the cause of modernist design. The Nazi regime had condemned the Bauhaus school, and its leaders left Germany. Two of them, Walter Gropius and Marcel Breuer, took positions at the Harvard GSD. Their iconoclastic focus on modern architecture appealed to Pei, and he worked closely with both men.
One of Pei's design projects at the GSD was a plan for an art museum in Shanghai. He wanted to create a mood of Chinese authenticity in the architecture without using traditional materials or styles. The design was based on straight modernist structures, organized around a central courtyard garden, with other similar natural settings arranged nearby. It was very well received, with Gropius calling it "the best thing done in [my] master class." Pei received his MArch degree in 1946, and taught at Harvard for another two years.
In the spring of 1948, Pei was recruited by New York real estate magnate William Zeckendorf to join a staff of architects for his firm of Webb and Knapp to design buildings around the country. Pei found Zeckendorf's personality the opposite of his own; his new boss was known for his loud speech and gruff demeanor. Nevertheless, they became good friends and Pei found the experience personally enriching. Zeckendorf was well connected politically, and Pei enjoyed learning about the social world of New York's city planners.
His first project for Webb and Knapp was an apartment building, which received funding from the Housing Act of 1949. Pei's design was based on a circular tower with concentric rings. The areas closest to the supporting pillar handled utilities and circulation, and the apartments themselves were located toward the outer edge. Zeckendorf loved the design and even showed it off to Le Corbusier when they met. The cost of such an unusual design was too high, however, and the building never progressed beyond the model stage.
Pei finally saw his architecture come to life in 1949, when he designed a two-story corporate building for Gulf Oil in Atlanta, Georgia. The building was demolished in February 2013, although the front façade was retained as part of an apartment development. His use of marble for the exterior curtain wall brought praise from the journal Architectural Forum. At the beginning of his career, Pei's designs echoed the work of Mies van der Rohe, as also shown in his own weekend house in Katonah, New York, in 1952. Soon, Pei was so inundated with projects that he asked Zeckendorf for assistants, whom he chose from among his associates at the GSD, including Henry N. Cobb and Ulrich Franzen. They set to work on a variety of proposals, including the Roosevelt Field Shopping Mall on Long Island. The team also redesigned the Webb and Knapp office building, transforming Zeckendorf's office into a circular space with teak walls and a glass clerestory. They also installed a control panel into the desk that allowed their boss to control the lighting in his office. The project took one year and exceeded its budget, but Zeckendorf was delighted with the results.
In 1952, Pei and his team began work on a series of projects in Denver, Colorado. The first of these was the Mile High Center, which compressed the core building into less than 25 percent of the total site; the rest is adorned with an exhibition hall and fountain-dotted plazas. One block away, Pei's team also redesigned Denver's Courthouse Square, which combined office spaces, commercial venues, and hotels. These projects helped Pei conceptualize architecture as part of the larger urban geography. "I learned the process of development," he said later, "and about the city as a living organism." These lessons, he said, became essential for later projects.
Pei and his team also designed a united urban area for Washington, D.C., called L'Enfant Plaza (named for French-American architect Pierre Charles L'Enfant). Pei's associate Araldo Cossutta was the lead architect for the plaza's North Building (955 L'Enfant Plaza SW) and South Building (490 L'Enfant Plaza SW). Vlastimil Koubek was the architect for the East Building (L'Enfant Plaza Hotel, located at 480 L'Enfant Plaza SW), and for the Center Building (475 L'Enfant Plaza SW; now the United States Postal Service headquarters). The team set out with a broad vision that was praised by both The Washington Post and Washington Star (which rarely agreed on anything), but funding problems forced revisions and a significant reduction in scale.
In 1955, Pei's group took a step toward institutional independence from Webb and Knapp by establishing a new firm called I. M. Pei & Associates. (The name changed later to I. M. Pei & Partners.) They gained the freedom to work with other companies, but continued working primarily with Zeckendorf. The new firm distinguished itself through the use of detailed architectural models. They took on the Kips Bay residential area on the East Side of Manhattan, where Pei set up Kips Bay Towers, two large long towers of apartments with recessed windows (to provide shade and privacy) in a neat grid, adorned with rows of trees. Pei involved himself in the construction process at Kips Bay, even inspecting the bags of cement to check for consistency of color.
The company continued its urban focus with the Society Hill project in central Philadelphia. Pei designed the Society Hill Towers, a three-building residential block injecting cubist design into the 18th-century milieu of the neighborhood. As with previous projects, abundant green spaces were central to Pei's vision, which also added traditional townhouses to aid the transition from classical to modern design.
From 1958 to 1963, Pei and Ray Affleck developed a key downtown block of Montreal in a phased process that involved one of Pei's most admired structures in the Commonwealth, the cruciform tower known as the Royal Bank Plaza (Place Ville Marie). According to The Canadian Encyclopedia "its grand plaza and lower office buildings, designed by internationally famous US architect I. M. Pei, helped to set new standards for architecture in Canada in the 1960s ... The tower's smooth aluminum and glass surface and crisp unadorned geometric form demonstrate Pei's adherence to the mainstream of 20th-century modern design."
Although those projects were satisfying, Pei wanted to establish an independent name for himself. In 1959, he was approached by MIT to design a building for its Earth science program. The Green Building continued the grid design of Kips Bay and Society Hill. The pedestrian walkway on the ground floor, however, was prone to sudden gusts of wind, which embarrassed Pei. "Here I was from MIT," he said, "and I didn't know about wind-tunnel effects." At the same time, he co-designed the Luce Memorial Chapel at Tunghai University in Taichung, Taiwan. The soaring structure, commissioned by the same organization that had run his middle school in Shanghai, broke severely from the cubist grid patterns of his urban projects.
The challenge of coordinating those projects took an artistic toll on Pei. He found himself responsible for acquiring new building contracts and supervising the plans for them. As a result, he felt disconnected from the actual creative work. "Design is something you have to put your hand to," he said. "While my people had the luxury of doing one job at a time, I had to keep track of the whole enterprise." Pei's dissatisfaction reached its peak at a time when financial problems began plaguing Zeckendorf's firm. I. M. Pei and Associates officially broke from Webb and Knapp in 1960, which benefited Pei creatively but pained him personally. He had developed a close friendship with Zeckendorf, and both men were sad to part ways.
Pei was able to return to hands-on design when he was approached in 1961 by Walter Orr Roberts to design the new Mesa Laboratory for the National Center for Atmospheric Research outside Boulder, Colorado. The project differed from Pei's earlier urban work because it rested in an open area in the foothills of the Rocky Mountains. He drove around the region with his wife, visiting assorted buildings and surveying the natural environs. He was impressed by the United States Air Force Academy in Colorado Springs, but felt it was "detached from nature".
The conceptualization stages were important for Pei, presenting a need and an opportunity to break from the Bauhaus tradition. He later recalled the long periods of time he spent in the area: "I recalled the places I had seen with my mother when I was a little boy—the mountaintop Buddhist retreats. There in the Colorado mountains, I tried to listen to the silence again—just as my mother had taught me. The investigation of the place became a kind of religious experience for me." Pei also drew inspiration from the Mesa Verde cliff dwellings of the Ancestral Puebloans; he wanted the buildings to exist in harmony with their natural surroundings. To this end, he called for a rock-treatment process that could color the buildings to match the nearby mountains. He also set the complex back on the mesa overlooking the city, and designed the approaching road to be long, winding, and indirect.
Roberts disliked Pei's initial designs, referring to them as "just a bunch of towers". Roberts intended his comments as typical of scientific experimentation, rather than artistic critique, but Pei was frustrated. His second attempt, however, fitted Roberts' vision perfectly: a spaced-out series of clustered buildings, joined by lower structures and complemented by two underground levels. The complex used many elements of cubist design, and the walkways were arranged to increase the probability of casual encounters among colleagues.
Once the laboratory was built, several problems with its construction became apparent. Leaks in the roof caused difficulties for researchers, and the shifting of clay soil beneath the building caused cracks which were expensive to repair. Still, both architect and project manager were pleased with the final result. Pei referred to the NCAR complex as his "breakout building", and he remained a friend of Roberts until the scientist died in March 1990.
The success of NCAR brought renewed attention to Pei's design acumen. He was recruited to work on a variety of projects, including the S. I. Newhouse School of Public Communications at Syracuse University, the Everson Museum of Art in Syracuse, New York, the Sundrome terminal at John F. Kennedy International Airport in New York City, and dormitories at New College of Florida.
After President John F. Kennedy was assassinated in November 1963, his family and friends discussed how to construct a library that would serve as a fitting memorial. A committee was formed to advise Kennedy's widow Jacqueline, who would make the final decision. The group deliberated for months and considered many famous architects. Eventually, Kennedy chose Pei to design the library, based on two considerations. First, she appreciated the variety of ideas he had used for earlier projects. "He didn't seem to have just one way to solve a problem," she said. "He seemed to approach each commission thinking only of it and then develop a way to make something beautiful." Ultimately, however, Kennedy made her choice based on her personal connection with Pei. Calling it "really an emotional decision", she explained: "He was so full of promise, like Jack; they were born in the same year. I decided it would be fun to take a great leap with him."
The project was plagued with problems from the outset. The first was scope. President Kennedy had begun considering the structure of his library soon after taking office, and he wanted to include archives from his administration, a museum of personal items, and a political science institute. After the assassination, the list expanded to include a fitting memorial tribute to the slain president. The variety of necessary inclusions complicated the design process and caused significant delays.
Pei's first proposed design included a large glass pyramid that would fill the interior with sunlight, meant to represent the optimism and hope that Kennedy's administration had symbolized for so many in the United States. Mrs. Kennedy liked the design, but resistance began in Cambridge, the first proposed site for the building, as soon as the project was announced. Many community members worried that the library would become a tourist attraction, causing particular problems with traffic congestion. Others worried that the design would clash with the architectural feel of nearby Harvard Square. By the mid-1970s, Pei tried proposing a new design, but the library's opponents resisted every effort. These events pained Pei, who had sent all three of his sons to Harvard, and although he rarely discussed his frustration, it was evident to his wife. "I could tell how tired he was by the way he opened the door at the end of the day," she said. "His footsteps were dragging. It was very hard for I. M. to see that so many people didn't want the building."
Finally the project moved to Columbia Point, near the University of Massachusetts Boston. The new site was less than ideal; it was located on an old landfill, and just over a large sewage pipe. Pei's architectural team added more fill to cover the pipe and developed an elaborate ventilation system to conquer the odor. A new design was unveiled, combining a large square glass-enclosed atrium with a triangular tower and a circular walkway.
The John F. Kennedy Presidential Library and Museum was dedicated on October 20, 1979. Critics generally liked the finished building, but the architect himself was unsatisfied. The years of conflict and compromise had changed the nature of the design, and Pei felt that the final result lacked its original passion. "I wanted to give something very special to the memory of President Kennedy," he said in 2000. "It could and should have been a great project." Pei's work on the Kennedy project boosted his reputation as an architect of note.
The Pei Plan was a failed urban redevelopment initiative designed for downtown Oklahoma City, Oklahoma, in 1964. The plan called for the demolition of hundreds of old downtown structures in favor of renewed parking, office building, and retail developments, in addition to public projects such as the Myriad Convention Center and the Myriad Botanical Gardens. It was the dominant template for downtown development in Oklahoma City from its inception through the 1970s. The plan generated mixed results and opinion, largely succeeding in re-developing office building and parking infrastructure but failing to attract its anticipated retail and residential development. Significant public resentment also developed as a result of the destruction of multiple historic structures. As a result, Oklahoma City's leadership avoided large-scale urban planning for downtown throughout the 1980s and early 1990s, until the passage of the Metropolitan Area Projects (MAPS) initiative in 1993.
Another city which turned to Pei for urban renewal during this time was Providence, Rhode Island. In the late 1960s, Providence hired Pei to redesign Cathedral Square, a once-bustling civic center which had become neglected and empty, as part of an ambitious larger plan to redesign downtown. Pei's new plaza, modeled after the Greek Agora marketplace, opened in 1972. The city ran out of money before Pei's vision could be fully realized. Also, recent construction of a low-income housing complex and Interstate 95 had changed the neighborhood's character permanently. In 1974, The Providence Evening Bulletin called Pei's new plaza a "conspicuous failure". By 2016, media reports characterized the plaza as a neglected, little-visited "hidden gem".
In 1974, the city of Augusta, Georgia turned to Pei and his firm for downtown revitalization. The Chamber of Commerce building and Bicentennial Park were completed from his plan. In 1976, Pei designed a distinctive modern penthouse that was added to the roof of architect William Lee Stoddart's historic Lamar Building, designed in 1916. In 1980, Pei and his company designed the Augusta Civic Center, now known as the James Brown Arena.
Kennedy's assassination also led indirectly to another commission for Pei's firm. In 1964 the acting mayor of Dallas, Erik Jonsson, began working to change the community's image. Dallas was known and disliked as the city where the president had been killed, but Jonsson began a program designed to initiate a community renewal. One of the goals was a new city hall, which could be a "symbol of the people". Jonsson, a co-founder of Texas Instruments, learned about Pei from his associate Cecil Howard Green, who had recruited the architect for MIT's Earth Sciences building.
Pei's approach to the new Dallas City Hall mirrored those of other projects; he surveyed the surrounding area and worked to make the building fit. In the case of Dallas, he spent days meeting with residents of the city and was impressed by their civic pride. He also found that the skyscrapers of the downtown business district dominated the skyline, and sought to create a building which could face the tall buildings and represent the importance of the public sector. He spoke of creating "a public-private dialogue with the commercial high-rises".
Working with his associate Theodore Musho, Pei developed a design centered on a building with a top much wider than the bottom; the facade leans at an angle of 34 degrees, which shades the building from the Texas sun. A plaza stretches out before the building, and a series of support columns holds it up. It was influenced by Le Corbusier's High Court building in Chandigarh, India; Pei sought to use the significant overhang to unify the building and plaza. The project cost much more than initially expected, and took 11 years to complete. Revenue was secured in part by including a subterranean parking garage. The interior of the city hall is large and spacious; windows in the ceiling above the eighth floor fill the main space with light.
The city of Dallas received the building well, and a local television news crew found unanimous approval of the new city hall when it officially opened to the public in 1978. Pei himself considered the project a success, even as he worried about the arrangement of its elements. He said: "It's perhaps stronger than I would have liked; it's got more strength than finesse." He felt that his relative lack of experience left him without the necessary design tools to refine his vision, but the community liked the city hall enough to invite him back. Over the years he went on to design five additional buildings in the Dallas area.
While Pei and Musho were coordinating the Dallas project, their associate Henry Cobb had taken the helm for a commission in Boston. John Hancock Insurance chairman Robert Slater hired I. M. Pei & Partners to design a building that could overshadow the Prudential Tower, erected by their rival.
After the firm's first plan was discarded due to a need for more office space, Cobb developed a new plan around a towering parallelogram, slanted away from the Trinity Church and accented by a wedge cut into each narrow side. To minimize the visual impact, the building was covered in large reflective glass panels; Cobb said this would make the building a "background and foil" to the older structures around it. When the Hancock Tower was finished in 1976, it was the tallest building in New England.
Serious issues of execution became evident in the tower almost immediately. Many glass panels fractured in a windstorm during construction in 1973. Some detached and fell to the ground, causing no injuries but sparking concern among Boston residents. In response, the entire tower was reglazed with smaller panels. This significantly increased the cost of the project. Hancock sued the glass manufacturers, Libbey-Owens-Ford, as well as I. M. Pei & Partners, for submitting plans that were "not good and workmanlike". LOF countersued Hancock for defamation, accusing Pei's firm of poor use of their materials; I. M. Pei & Partners sued LOF in return. All three companies settled out of court in 1981.
The project became an albatross for Pei's firm. Pei himself refused to discuss it for many years. The pace of new commissions slowed and the firm's architects began looking overseas for opportunities. Cobb worked in Australia and Pei took on jobs in Singapore, Iran, and Kuwait. Although it was a difficult time for everyone involved, Pei later reflected with patience on the experience. "Going through this trial toughened us," he said. "It helped to cement us as partners; we did not give up on each other."
In the mid-1960s, directors of the National Gallery of Art in Washington, D.C., declared the need for a new building. Paul Mellon, a primary benefactor of the gallery and a member of its building committee, set to work with his assistant J. Carter Brown (who became gallery director in 1969) to find an architect. The new structure would be located to the east of the original building, and tasked with two functions: offer a large space for public appreciation of various popular collections; and house office space as well as archives for scholarship and research. They likened the scope of the new facility to the Library of Alexandria. After inspecting Pei's work at the Des Moines Art Center in Iowa and the Johnson Museum at Cornell University, they offered him the commission.
Pei took to the project with vigor, and set to work with two young architects he had recently recruited to the firm, William Pedersen and Yann Weymouth. Their first obstacle was the unusual shape of the building site, a trapezoid of land at the intersection of Constitution and Pennsylvania Avenues. Inspiration struck Pei in 1968, when he scrawled a rough diagram of two triangles on a scrap of paper. The larger building would be the public gallery; the smaller would house offices and archives. This triangular shape became a singular vision for the architect. As the date for groundbreaking approached, Pedersen suggested to his boss that a slightly different approach would make construction easier. Pei simply smiled and said: "No compromises."
The growing popularity of art museums presented unique challenges to the architecture. Mellon and Pei both expected large crowds of people to visit the new building, and they planned accordingly. To this end, Pei designed a large lobby roofed with enormous skylights. Individual galleries are located along the periphery, allowing visitors to return after viewing each exhibit to the spacious main room. A large mobile sculpture by American artist Alexander Calder was later added to the lobby. Pei hoped the lobby would be exciting to the public in the same way as the central room of the Guggenheim Museum is in New York City. The modern museum, he said later, "must pay greater attention to its educational responsibility, especially to the young".
Materials for the building's exterior were chosen with careful precision. To match the look and texture of the original gallery's marble walls, builders re-opened the quarry in Knoxville, Tennessee, from which the first batch of stone had been harvested. The project even found and hired Malcolm Rice, a quarry supervisor who had overseen the original 1941 gallery project. The marble was cut into three-inch-thick blocks and arranged over the concrete foundation, with darker blocks at the bottom and lighter blocks on top.
The East Building was honored on May 30, 1978, two days before its public unveiling, with a black-tie party attended by celebrities, politicians, benefactors, and artists. When the building opened, popular opinion was enthusiastic. Large crowds visited the new museum, and critics generally voiced their approval. Ada Louise Huxtable wrote in The New York Times that Pei's building was "a palatial statement of the creative accommodation of contemporary art and architecture". The sharp angle of the smaller building has been a particular note of praise for the public; over the years it has become stained and worn from the hands of visitors.
Some critics disliked the unusual design, however, and criticized the reliance on triangles throughout the building. Others took issue with the large main lobby, particularly its attempt to lure casual visitors. In his review for Artforum, critic Richard Hennessy described a "shocking fun-house atmosphere" and "aura of ancient Roman patronage". One of the earliest and most vocal critics, however, came to appreciate the new gallery once he saw it in person. Allan Greenberg had scorned the design when it was first unveiled, but wrote later to J. Carter Brown: "I am forced to admit that you are right and I was wrong! The building is a masterpiece."
After U.S. President Richard Nixon made his famous 1972 visit to China, a wave of exchanges took place between the two countries. One of these was a delegation of the American Institute of Architects in 1974, which Pei joined. It was his first trip back to China since leaving in 1935. He was favorably received, returned the welcome with positive comments, and a series of lectures ensued. Pei noted in one lecture that since the 1950s Chinese architects had been content to imitate Western styles; he urged his audience to search China's native traditions for inspiration.
In 1978, Pei was asked to initiate a project for his home country. After surveying a number of different locations, Pei fell in love with a valley that had once served as an imperial garden and hunting preserve known as Fragrant Hills. The site housed a decrepit hotel; Pei was invited to tear it down and build a new one. As usual, he approached the project by carefully considering its context and purpose. He considered imported modernist styles inappropriate for the setting, just as he had urged Chinese architects not to simply imitate the West; thus, he said, it was necessary to find "a third way".
After visiting his ancestral home in Suzhou, Pei created a design based on some simple but nuanced techniques he admired in traditional residential Chinese buildings. Among these were abundant gardens, integration with nature, and consideration of the relationship between enclosure and opening. Pei's design included a large central atrium covered by glass panels that functioned much like the large central space in his East Building of the National Gallery. Openings of various shapes in walls invited guests to view the natural scenery beyond. Younger Chinese who had hoped the building would exhibit some of the Cubist flavor for which Pei had become known were disappointed, but the new hotel found more favor with government officials and architects.
The hotel, with 325 guest rooms and a four-story central atrium, was designed to fit perfectly into its natural habitat. The trees in the area were of special concern, and particular care was taken to cut down as few as possible. He worked with an expert from Suzhou to preserve and renovate a water maze from the original hotel, one of only five in the country. Pei was also meticulous about the arrangement of items in the garden behind the hotel; he even insisted on transporting 230 short tons (210 t) of rocks from a location in southwest China to suit the natural aesthetic. An associate of Pei's said later that he never saw the architect so involved in a project.
During construction, a series of mistakes, combined with the nation's lack of technology, strained relations between architects and builders. Whereas 200 or so workers might have been used for a similar building in the US, the Fragrant Hill project employed over 3,000 workers. This was mostly because the construction company lacked the sophisticated machines used in other parts of the world. The problems continued for months, until Pei had an uncharacteristically emotional moment during a meeting with Chinese officials. He later explained that his actions included "shouting and pounding the table" in frustration. The design staff noticed a difference in the manner of work among the crew after the meeting. As the opening neared, however, Pei found the hotel still needed work. He began scrubbing floors with his wife and ordered his children to make beds and vacuum floors. The project's difficulties took an emotional and physical toll on the Pei family.
The Fragrant Hill Hotel opened on October 17, 1982, but quickly fell into disrepair. A member of Pei's staff returned for a visit several years later and confirmed the dilapidated condition of the hotel. He and Pei attributed this to the country's general unfamiliarity with deluxe buildings. The Chinese architectural community at the time gave the structure little attention, as their interest at the time centered on the work of American postmodernists such as Michael Graves.
As the Fragrant Hill project neared completion, Pei began work on the Jacob K. Javits Convention Center in New York City, for which his associate James Freed served as lead designer. Hoping to create a vibrant community institution in what was then a run-down neighborhood on Manhattan's west side, Freed developed a glass-coated structure with an intricate space frame of interconnected metal rods and spheres.
The convention center was plagued from the start by budget problems and construction blunders. City regulations forbade a general contractor from having final authority over the project, so architects and program manager Richard Kahan had to coordinate the wide array of builders, plumbers, electricians, and other workers. The forged steel globes to be used in the space frame came to the site with hairline cracks and other defects: 12,000 were rejected. These and other problems led to media comparisons with the disastrous Hancock Tower. One New York City official blamed Kahan for the difficulties, indicating that the building's architectural flourishes were responsible for delays and financial crises. The Javits Center opened on April 3, 1986, to a generally positive reception. During the inauguration ceremonies, however, neither Freed nor Pei was recognized for their role in the project.
When François Mitterrand was elected President of France in 1981, he laid out an ambitious plan for a variety of construction projects. One of these was the renovation of the Louvre. Mitterrand appointed a civil servant named Émile Biasini to oversee it. After visiting museums in Europe and the United States, including the U.S. National Gallery, he asked Pei to join the team. The architect made three secretive trips to Paris to determine the feasibility of the project; only one museum employee knew why he was there. Pei finally agreed that a new construction project was not only possible, but necessary for the future of the museum. He thus became the first foreign architect to work on the Louvre.
The heart of the new design included not only a renovation of the Cour Napoléon in the midst of the buildings, but also a transformation of the interiors. Pei proposed a central entrance, not unlike the lobby of the National Gallery East Building, which would link the three major wings around the central space. Below would be a complex of additional floors for research, storage, and maintenance purposes. At the center of the courtyard he designed a glass and steel pyramid, first proposed with the Kennedy Library, to serve as entrance and anteroom skylight. It was mirrored by an inverted pyramid to the west, to reflect sunlight into the complex. These designs were partly an homage to the fastidious geometry of the French landscape architect André Le Nôtre (1613–1700). Pei also found the pyramid shape best suited for stable transparency, and considered it "most compatible with the architecture of the Louvre, especially with the faceted planes of its roofs".
Biasini and Mitterrand liked the plans, but the scope of the renovation displeased Louvre administrator André Chabaud. He resigned from his post, complaining that the project was "unfeasible" and posed "architectural risks". Some sections of the French public also reacted harshly to the design, mostly because of the proposed pyramid. One critic called it a "gigantic, ruinous gadget"; another charged Mitterrand with "despotism" for inflicting Paris with the "atrocity". Pei estimated that 90 percent of Parisians opposed his design. "I received many angry glances in the streets of Paris," he said. Some condemnations carried nationalistic overtones. One opponent wrote: "I am surprised that one would go looking for a Chinese architect in America to deal with the historic heart of the capital of France."
Soon, however, Pei and his team won the support of several key cultural icons, including the conductor Pierre Boulez and Claude Pompidou, widow of former French President Georges Pompidou, after whom the similarly controversial Centre Georges Pompidou was named. In an attempt to soothe public ire, Pei took a suggestion from then-mayor of Paris Jacques Chirac and placed a full-sized cable model of the pyramid in the courtyard. During the four days of its exhibition, an estimated 60,000 people visited the site. Some critics eased their opposition after witnessing the proposed scale of the pyramid.
Pei demanded a method of glass production that resulted in clear panes. The pyramid was constructed at the same time as the subterranean levels below, which caused difficulties during the building stages. As they worked, construction teams came upon an abandoned set of rooms containing 25,000 historical items; these were incorporated into the rest of the structure to add a new exhibition zone.
The new Cour Napoléon was opened to the public on October 14, 1988, and the Pyramid entrance was opened the following March. By this time, public opposition had softened; a poll found a 56 percent approval rating for the pyramid, with 23 percent still opposed. The newspaper Le Figaro had vehemently criticized Pei's design, but later celebrated the tenth anniversary of its magazine supplement at the pyramid. Prince Charles of Britain surveyed the new site with curiosity, and declared it "marvelous, very exciting". A writer in Le Quotidien de Paris wrote: "The much-feared pyramid has become adorable."
The experience was exhausting for Pei, but also rewarding. "After the Louvre," he said later, "I thought no project would be too difficult." The pyramid achieved further widespread international recognition for its central role in the plot at the denouement of The Da Vinci Code by Dan Brown and its appearance in the final scene of the subsequent screen adaptation. The Louvre Pyramid became Pei's most famous structure.
The opening of the Louvre Pyramid coincided with four other projects on which Pei had been working, prompting architecture critic Paul Goldberger to declare 1989 "the year of Pei" in The New York Times. It was also the year in which Pei's firm changed its name to Pei Cobb Freed & Partners, to reflect the increasing stature and prominence of his associates. At the age of 72, Pei had begun thinking about retirement, but continued working long hours to see his designs come to light.
One of the projects took Pei back to Dallas, Texas, to design the Morton H. Meyerson Symphony Center. The success of the city's performing artists, particularly the Dallas Symphony Orchestra, then led by conductor Eduardo Mata, sparked interest among city leaders in creating a modern center for musical arts that could rival the best halls in Europe. The organizing committee contacted 45 architects, but at first Pei did not respond, thinking that his work on the Dallas City Hall had left a negative impression. One of his colleagues from that project, however, insisted that he meet with the committee. He did and, although it would be his first concert hall, the committee voted unanimously to offer him the commission. As one member put it: "We were convinced that we would get the world's greatest architect putting his best foot forward."
The project presented a variety of specific challenges. Because its main purpose was the presentation of live music, the hall needed a design focused on acoustics first, then public access and exterior aesthetics. To this end, a professional sound technician was hired to design the interior. He proposed a shoebox auditorium, used in the acclaimed designs of top European symphony halls such as the Amsterdam Concertgebouw and Vienna Musikverein. Pei drew inspiration for his adjustments from the designs of the German architect Johann Balthasar Neumann, especially the Basilica of the Fourteen Holy Helpers. He also sought to incorporate some of the panache of the Paris Opéra designed by Charles Garnier.
Pei's design placed the rigid shoebox at an angle to the surrounding street grid, connected at the north end to a long rectangular office building, and cut through the middle with an assortment of circles and cones. The design attempted to reproduce with modern features the acoustic and visual functions of traditional elements like filigree. The project was risky: its goals were ambitious and any unforeseen acoustic flaws would be virtually impossible to remedy after the hall's completion. Pei admitted that he did not completely know how everything would come together. "I can imagine only 60 percent of the space in this building," he said during the early stages. "The rest will be as surprising to me as to everyone else." As the project developed, costs rose steadily and some sponsors considered withdrawing their support. Billionaire tycoon Ross Perot made a donation of US$10 million, on the condition that it be named in honor of Morton H. Meyerson, the longtime patron of the arts in Dallas.
The building opened and immediately garnered widespread praise, especially for its acoustics. After attending a week of performances in the hall, a music critic for The New York Times wrote an enthusiastic account of the experience and congratulated the architects. One of Pei's associates told him during a party before the opening that the symphony hall was "a very mature building"; he smiled and replied: "Ah, but did I have to wait this long?"
A new offer had arrived for Pei from the Chinese government in 1982. With an eye toward the transfer of sovereignty over Hong Kong from the British in 1997, authorities in China sought Pei's aid on a new tower for the local branch of the Bank of China. The Chinese government was preparing for a new wave of engagement with the outside world and sought a tower to represent modernity and economic strength. Given the elder Pei's history with the bank before the Communist takeover, government officials visited the 89-year-old man in New York to gain approval for his son's involvement. Pei then spoke with his father at length about the proposal. Although the architect remained pained by his experience with Fragrant Hills, he agreed to accept the commission.
The proposed site in Hong Kong's Central District was less than ideal; a tangle of highways lined it on three sides. The area had also been home to a headquarters for Japanese military police during World War II, and was notorious for prisoner torture. The small parcel of land made a tall tower necessary, and Pei had usually shied away from such projects; in Hong Kong especially, the skyscrapers lacked any real architectural character. Lacking inspiration and unsure of how to approach the building, Pei took a weekend vacation to the family home in Katonah, New York. There he found himself experimenting with a bundle of sticks until he happened upon a cascading sequence.
Pei felt that his design for the Bank of China Tower needed to reflect "the aspirations of the Chinese people". The design that he developed for the skyscraper was not only unique in appearance, but also sound enough to pass the city's rigorous standards for wind-resistance. The building is composed of four triangular shafts rising up from a square base, supported by a visible truss structure that distributes stress to the four corners of the base. Using the reflective glass that had become something of a trademark for him, Pei organized the facade around diagonal bracing in a union of structure and form that reiterates the triangle motif established in the plan. At the top, he designed the roofs at sloping angles to match the rising aesthetic of the building. Some influential advocates of feng shui in Hong Kong and China criticized the design, and Pei and government officials responded with token adjustments.
As the tower neared completion, Pei was shocked to witness the government's massacre of unarmed civilians at the Tiananmen Square protests of 1989. He wrote an opinion piece for The New York Times titled "China Won't Ever Be the Same", in which he said that the killings "tore the heart out of a generation that carries the hope for the future of the country". The massacre deeply disturbed his entire family, and he wrote that "China is besmirched."
As the 1990s began, Pei transitioned into a role of decreased involvement with his firm. The staff had begun to shrink, and Pei wanted to dedicate himself to smaller projects allowing for more creativity. Before he made this change, however, he set to work on his last major project as active partner: the Rock and Roll Hall of Fame in Cleveland, Ohio. Considering his work on such bastions of high culture as the Louvre and U.S. National Gallery, some critics were surprised by his association with what many considered a tribute to low culture. The sponsors of the hall, however, sought Pei specifically for this reason; they wanted the building to have an aura of respectability from the beginning. As in the past, Pei accepted the commission in part because of the unique challenge it presented.
Using a glass wall for the entrance, similar in appearance to his Louvre pyramid, Pei coated the exterior of the main building in white metal, and placed a large cylinder on a narrow perch to serve as a performance space. The combination of off-centered wraparounds and angled walls was, Pei said, designed to provide "a sense of tumultuous youthful energy, rebelling, flailing about".
The building opened in 1995, and was received with moderate praise. The New York Times called it "a fine building", but Pei was among those who felt disappointed with the results. The museum's origins in New York, combined with an unclear mission, left project leaders with a fuzzy understanding of precisely what was needed. Although the city of Cleveland benefited greatly from the new tourist attraction, Pei was unhappy with it.
At the same time, Pei designed a new museum for Luxembourg, the Musée d'art moderne Grand-Duc Jean, commonly known as the Mudam. Drawing from the original shape of the Fort Thüngen walls where the museum was located, Pei planned to remove a portion of the original foundation. Public resistance to the historical loss forced a revision of his plan, however, and the project was nearly abandoned. The size of the building was halved, and it was set back from the original wall segments to preserve the foundation. Pei was disappointed with the alterations, but remained involved in the building process even during construction.
In 1995, Pei was hired to design an extension to the Deutsches Historisches Museum, or German Historical Museum in Berlin. Returning to the challenge of the East Building of the U.S. National Gallery, Pei worked to combine a modernist approach with a classical main structure. He described the glass cylinder addition as a "beacon", and topped it with a glass roof to allow plentiful sunlight inside. Pei had difficulty working with German government officials on the project; their utilitarian approach clashed with his passion for aesthetics. "They thought I was nothing but trouble", he said.
Pei also worked at this time on two projects for a new Japanese religious movement called Shinji Shumeikai. He was approached by the movement's spiritual leader, Kaishu Koyama, who impressed the architect with her sincerity and willingness to give him significant artistic freedom. One of the buildings was a bell tower, designed to resemble the bachi used when playing traditional instruments like the shamisen. Pei was unfamiliar with the movement's beliefs, but explored them in order to represent something meaningful in the tower. As he said: "It was a search for the sort of expression that is not at all technical."
The experience was rewarding for Pei, and he agreed immediately to work with the group again. The new project was the Miho Museum, to display Koyama's collection of tea ceremony artifacts. Pei visited the site in Shiga Prefecture, and during their conversations convinced Koyama to expand her collection. She conducted a global search and acquired more than 300 items showcasing the history of the Silk Road.
One major challenge was the approach to the museum. The Japanese team proposed a winding road up the mountain, not unlike the approach to the NCAR building in Colorado. Instead, Pei ordered a hole cut through a nearby mountain, connected to a major road via a bridge suspended from ninety-six steel cables and supported by a post set into the mountain. The museum itself was built into the mountain, with 80 percent of the building underground.
When designing the exterior, Pei borrowed from the tradition of Japanese temples, particularly those found in nearby Kyoto. He created a concise spaceframe wrapped in French limestone and covered with a glass roof. Pei also oversaw specific decorative details, including a bench in the entrance lobby, carved from a 350-year-old keyaki tree. Because of Koyama's considerable wealth, money was rarely considered an obstacle; estimates at the time of completion put the cost of the project at US$350 million.
During the first decade of the 2000s, Pei designed a variety of buildings, including the Suzhou Museum near his childhood home. He also designed the Museum of Islamic Art in Doha, Qatar, at the request of the Al-Thani Family. Although it was originally planned for the corniche road along Doha Bay, Pei convinced the project coordinators to build a new island to provide the needed space. He then spent six months touring the region and surveying mosques in Spain, Syria, and Tunisia. He was especially impressed with the elegant simplicity of the Mosque of Ibn Tulun in Cairo.
Once again, Pei sought to combine new design elements with the classical aesthetic most appropriate for the location of the building. The sand-colored rectangular boxes rotate evenly to create a subtle movement, with small arched windows set at regular intervals in the limestone exterior. Inside, galleries are arranged around a massive atrium, lit from above. The museum's coordinators were pleased with the project; its official website describes its "true splendour unveiled in the sunlight," and speaks of "the shades of colour and the interplay of shadows paying tribute to the essence of Islamic architecture".
The Macao Science Center in Macau was designed by Pei Partnership Architects in association with I. M. Pei. The project to build the science center was conceived in 2001 and construction started in 2006. The center was completed in 2009 and opened by the Chinese President Hu Jintao. The main part of the building is a distinctive conical shape with a spiral walkway and large atrium inside, similar to that of the Solomon R. Guggenheim Museum in New York City. Galleries lead off the walkway, mainly consisting of interactive exhibits aimed at science education. The building is in a prominent position by the sea and is now a Macau landmark.
Pei's career ended with his death in May 2019, at 102 years of age.
Pei's style was described as thoroughly modernist, with significant cubist themes. He was known for combining traditional architectural principles with progressive designs based on simple geometric patterns—circles, squares, and triangles are common elements of his work in both plan and elevation. As one critic wrote: "Pei has been aptly described as combining a classical sense of form with a contemporary mastery of method." In 2000, biographer Carter Wiseman called Pei "the most distinguished member of his Late-Modernist generation still in practice". At the same time, Pei himself rejected simple dichotomies of architectural trends. He once said: "The talk about modernism versus post-modernism is unimportant. It's a side issue. An individual building, the style in which it is going to be designed and built, is not that important. The important thing, really, is the community. How does it affect life?"
Pei's work is celebrated throughout the world of architecture. His colleague John Portman once told him: "Just once, I'd like to do something like the East Building." But this originality did not always bring large financial reward; as Pei replied to the successful architect: "Just once, I'd like to make the kind of money you do." His concepts, moreover, were too individualized and dependent on context to have given rise to a particular school of design. Pei referred to his own "analytical approach" when explaining the lack of a "Pei School".
"For me," he said, "the important distinction is between a stylistic approach to the design; and an analytical approach giving the process of due consideration to time, place, and purpose ... My analytical approach requires a full understanding of the three essential elements ... to arrive at an ideal balance among them."
In the words of his biographer, Pei won "every award of any consequence in his art", including the Arnold Brunner Award from the National Institute of Arts and Letters (1963), the Gold Medal for Architecture from the American Academy of Arts and Letters (1979), the AIA Gold Medal (1979), the first Praemium Imperiale for Architecture from the Japan Art Association (1989), the Lifetime Achievement Award from the Cooper-Hewitt, National Design Museum, the 1998 Edward MacDowell Medal in the Arts, and the 2010 Royal Gold Medal from the Royal Institute of British Architects. In 1983 he was awarded the Pritzker Prize, sometimes referred to as the Nobel Prize of architecture. In its citation, the jury said: "Ieoh Ming Pei has given this century some of its most beautiful interior spaces and exterior forms ... His versatility and skill in the use of materials approach the level of poetry." The prize was accompanied by a US$100,000 award, which Pei used to create a scholarship for Chinese students to study architecture in the U.S., on the condition that they return to China to work. In 1986, he was one of twelve recipients of the Medal of Liberty. When he was awarded the 2003 Henry C. Turner Prize by the National Building Museum, museum board chair Carolyn Brody praised his impact on construction innovation: "His magnificent designs have challenged engineers to devise innovative structural solutions, and his exacting expectations for construction quality have encouraged contractors to achieve high standards." In December 1992, Pei was awarded the Presidential Medal of Freedom by President George H. W. Bush. In 1996, Pei became the first person to be elected a foreign member of the Chinese Academy of Engineering. Pei was also an elected member of the American Academy of Arts and Sciences and the American Philosophical Society.
Pei's wife of over 70 years, Eileen Loo, died on June 20, 2014. Together they had three sons, T'ing Chung (1945–2003), Chien Chung (1946–2023; known as Didi), and Li Chung (b. 1949; known as Sandi); and a daughter, Liane (b. 1960). T'ing Chung was an urban planner and an alumnus of his father's alma maters, MIT and Harvard. Chien Chung and Li Chung, who are both Harvard College and Harvard Graduate School of Design alumni, founded and run Pei Partnership Architects. Liane is a lawyer.
In 2015, Pei's home health aide, Eter Nikolaishvili, grabbed Pei's right forearm and twisted it, resulting in bruising and bleeding and hospital treatment. Pei alleged that the assault occurred when he threatened to call the police about Nikolaishvili. Nikolaishvili agreed to plead guilty in 2016.
Pei celebrated his 100th birthday on April 26, 2017. He died at his Manhattan apartment on May 16, 2019, at the age of 102. He was survived by his three remaining adult children, seven grandchildren, and five great-grandchildren.
In the 2021 parody film America: The Motion Picture, I. M. Pei was voiced by David Callaham. | [
{
"paragraph_id": 0,
"text": "Ieoh Ming Pei FAIA RIBA (/ˌjoʊ mɪŋ ˈpeɪ/ YOH ming PAY; Chinese: 貝聿銘; pinyin: Bèi Yùmíng; April 26, 1917 – May 16, 2019) was a Chinese-American architect. Raised in Shanghai, Pei drew inspiration at an early age from the garden villas at Suzhou, the traditional retreat of the scholar-gentry to which his family belonged. In 1935, he moved to the United States and enrolled in the University of Pennsylvania's architecture school, but he quickly transferred to the Massachusetts Institute of Technology. He was unhappy with the focus on Beaux-Arts architecture at both schools, and spent his free time researching emerging architects, especially Le Corbusier.",
"title": ""
},
{
"paragraph_id": 1,
"text": "After graduating, he joined the Harvard Graduate School of Design (GSD) and became a friend of the Bauhaus architects Walter Gropius and Marcel Breuer. In 1948, Pei was recruited by New York City real estate magnate William Zeckendorf, for whom he worked for seven years before establishing an independent design firm, I. M. Pei & Associates, in 1955. In 1966, that became I. M. Pei & Partners, and became Pei Cobb Freed & Partners in 1989. Pei retired from full-time practice in 1990. In his retirement, he worked as an architectural consultant primarily from his sons' architectural firm Pei Partnership Architects.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Pei's first major recognition came with the Mesa Laboratory at the National Center for Atmospheric Research in Colorado (designed in 1961, and completed in 1967). His new stature led to his selection as chief architect for the John F. Kennedy Library in Massachusetts. He went on to design Dallas City Hall and the East Building of the National Gallery of Art. He returned to China for the first time in 1975 to design a hotel at Fragrant Hills and, fifteen years later, designed Bank of China Tower, Hong Kong, a skyscraper in Hong Kong for the Bank of China.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In the early 1980s, Pei was the focus of controversy when he designed a glass-and-steel pyramid for the Louvre in Paris. He later returned to the world of the arts by designing the Morton H. Meyerson Symphony Center in Dallas, the Miho Museum in Japan, Shigaraki, near Kyoto, and the chapel of the junior and high school: MIHO Institute of Aesthetics, the Suzhou Museum in Suzhou, Museum of Islamic Art in Qatar, and the Grand Duke Jean Museum of Modern Art, abbreviated to Mudam, in Luxembourg.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Pei won a wide variety of prizes and awards in the field of architecture, including the AIA Gold Medal in 1979, the first Praemium Imperiale for Architecture in 1989, and the Lifetime Achievement Award from the Cooper-Hewitt, National Design Museum, in 2003. In 1983, he won the Pritzker Prize, which is sometimes referred to as the Nobel Prize of architecture.",
"title": ""
},
{
"paragraph_id": 5,
"text": "I. M. Pei's ancestry traces back to the Ming dynasty, when his family moved from Anhui to Suzhou. The family made their wealth in medicinal herbs, then joined the ranks of the scholar-gentry. Pei Ieoh Ming was born on April 26, 1917, to Tsuyee and Lien Kwun, and the family moved to Hong Kong one year later. It eventually included five children. As a boy, Pei was very close to his mother, a devout Buddhist, who was recognized for her skills as a flautist. She invited him, but not his brothers or sisters, to join her on meditation retreats. His relationship with his father was less intimate. Their interactions were respectful but distant.",
"title": "Childhood"
},
{
"paragraph_id": 6,
"text": "Pei's ancestors' success meant that the family lived in the upper echelons of society, but Pei said his father was \"not cultivated in the ways of the arts\". The younger Pei, drawn more to music and other cultural forms than to his father's domain of banking, explored art on his own. \"I have cultivated myself,\" he said later.",
"title": "Childhood"
},
{
"paragraph_id": 7,
"text": "Pei studied in St. Paul's College in Hong Kong as a child. When Pei was 10, his father received a promotion and relocated with his family to Shanghai. Pei attended St. John's Middle School, the secondary school of St. John's University that was run by Anglican missionaries. Academic discipline was rigorous; students were allowed only one half-day each month for leisure. Pei enjoyed playing billiards and watching Hollywood movies, especially those of Buster Keaton and Charlie Chaplin. He also learned rudimentary English skills by reading the Bible and novels by Charles Dickens.",
"title": "Childhood"
},
{
"paragraph_id": 8,
"text": "Shanghai's many international elements gave it the name \"Paris of the East\". The city's global architectural flavors had a profound influence on Pei, from The Bund waterfront area to the Park Hotel, built in 1934. He was also impressed by the many gardens of Suzhou, where he spent the summers with extended family and regularly visited a nearby ancestral shrine. The Shizilin Garden, built in the 14th century by a Buddhist monk and owned by Pei's uncle Bei Runsheng, was especially influential. Its unusual rock formations, stone bridges, and waterfalls remained etched in Pei's memory for decades. He spoke later of his fondness for the garden's blending of natural and human-built structures.",
"title": "Childhood"
},
{
"paragraph_id": 9,
"text": "Soon after the move to Shanghai, Pei's mother developed cancer. As a pain reliever, she was prescribed opium, and assigned the task of preparing her pipe to Pei. She died shortly after his thirteenth birthday, and he was profoundly upset. The children were sent to live with extended family, as their father became more consumed by his work and more physically distant. Pei said: \"My father began living his own separate life pretty soon after that.\" His father later married a woman named Aileen, who moved to New York later in her life.",
"title": "Childhood"
},
{
"paragraph_id": 10,
"text": "As Pei neared the end of his secondary education, he decided to study at a university. He was accepted by a number of schools, but decided to enrol at the University of Pennsylvania. Pei's choice had two roots. While studying in Shanghai, he had closely examined the catalogs for various institutions of higher learning around the world. The architectural program at the University of Pennsylvania stood out to him. The other major factor was Hollywood. Pei was fascinated by the representations of college life in the films of Bing Crosby, which differed tremendously from the academic atmosphere in China. \"College life in the U.S. seemed to me to be mostly fun and games\", he said in 2000. \"Since I was too young to be serious, I wanted to be part of it ... You could get a feeling for it in Bing Crosby's movies. College life in America seemed very exciting to me. It's not real, we know that. Nevertheless, at that time it was very attractive to me. I decided that was the country for me.\" Pei added that \"Crosby's films in particular had a tremendous influence on my choosing the United States instead of England to pursue my education.\"",
"title": "Education and formative years"
},
{
"paragraph_id": 11,
"text": "In 1935, Pei boarded a boat and sailed to San Francisco, then traveled by train to Philadelphia. What he found once he arrived, however, differed vastly from his expectations. Professors at the University of Pennsylvania based their teaching in the Beaux-Arts style, rooted in the classical traditions of ancient Greece and Rome. Pei was more intrigued by modern architecture, and also felt intimidated by the high level of drafting proficiency shown by other students. He decided to abandon architecture and transferred to the engineering program at Massachusetts Institute of Technology (MIT). Once he arrived, however, the dean of the architecture school commented on his eye for design and convinced Pei to return to his original major.",
"title": "Education and formative years"
},
{
"paragraph_id": 12,
"text": "MIT's architecture faculty was also focused on the Beaux-Arts school, and Pei found himself uninspired by the work. In the library he found three books by the Swiss-French architect Le Corbusier. Pei was inspired by the innovative designs of the new International Style, characterized by simplified form and the use of glass and steel materials. Le Corbusier visited MIT in November 1935, an occasion which powerfully affected Pei: \"The two days with Le Corbusier, or 'Corbu' as we used to call him, were probably the most important days in my architectural education.\" Pei was also influenced by the work of U.S. architect Frank Lloyd Wright. In 1938 he drove to Spring Green, Wisconsin, to visit Wright's famous Taliesin building. After waiting for two hours, however, he left without meeting Wright.",
"title": "Education and formative years"
},
{
"paragraph_id": 13,
"text": "Although he disliked the Beaux-Arts emphasis at MIT, Pei excelled in his studies. \"I certainly don't regret the time at MIT\", he said later. \"There I learned the science and technique of building, which is just as essential to architecture.\" Pei received his BArch degree in 1940; his thesis was titled \"Standardized Propaganda Units for War Time and Peace Time China\".",
"title": "Education and formative years"
},
{
"paragraph_id": 14,
"text": "While visiting New York City in the late 1930s, Pei met a Wellesley College student named Eileen Loo. They began dating and married in the spring of 1942. She enrolled in the landscape architecture program at Harvard University, and Pei was thus introduced to members of the faculty at Harvard's Graduate School of Design (GSD). He was excited by the lively atmosphere and joined the GSD in December 1942.",
"title": "Education and formative years"
},
{
"paragraph_id": 15,
"text": "Less than a month later, Pei suspended his work at Harvard to join the National Defense Research Committee, which coordinated scientific research into U.S. weapons technology during World War II. Pei's background in architecture was seen as a considerable asset; one member of the committee told him: \"If you know how to build you should also know how to destroy.\" The fight against Germany was ending, so he focused on the Pacific War. The U.S. realized that its bombs used against the stone buildings of Europe would be ineffective against Japanese cities, mostly constructed from wood and paper; Pei was assigned to work on incendiary bombs. Pei spent two and a half years with the NDRC, but revealed few details of his work.",
"title": "Education and formative years"
},
{
"paragraph_id": 16,
"text": "In 1945, Eileen gave birth to a son, T'ing Chung, and she withdrew from the landscape architecture program in order to care for him. Pei returned to Harvard in the autumn of 1945, and received a position as assistant professor of design. The GSD was developing into a hub of resistance to the Beaux-Arts orthodoxy. At the center were members of the Bauhaus, a European architectural movement that had advanced the cause of modernist design. The Nazi regime had condemned the Bauhaus school, and its leaders left Germany. Two of them, Walter Gropius and Marcel Breuer, took positions at the Harvard GSD. Their iconoclastic focus on modern architecture appealed to Pei, and he worked closely with both men.",
"title": "Education and formative years"
},
{
"paragraph_id": 17,
"text": "One of Pei's design projects at the GSD was a plan for an art museum in Shanghai. He wanted to create a mood of Chinese authenticity in the architecture without using traditional materials or styles. The design was based on straight modernist structures, organized around a central courtyard garden, with other similar natural settings arranged nearby. It was very well received, with Gropius calling it \"the best thing done in [my] master class.\" Pei received his MArch degree in 1946, and taught at Harvard for another two years.",
"title": "Education and formative years"
},
{
"paragraph_id": 18,
"text": "In the spring of 1948, Pei was recruited by New York real estate magnate William Zeckendorf to join a staff of architects for his firm of Webb and Knapp to design buildings around the country. Pei found Zeckendorf's personality the opposite of his own; his new boss was known for his loud speech and gruff demeanor. Nevertheless, they became good friends and Pei found the experience personally enriching. Zeckendorf was well connected politically, and Pei enjoyed learning about the social world of New York's city planners.",
"title": "Career"
},
{
"paragraph_id": 19,
"text": "His first project for Webb and Knapp was an apartment building, which received funding from the Housing Act of 1949. Pei's design was based on a circular tower with concentric rings. The areas closest to the supporting pillar handled utilities and circulation, and the apartments themselves were located toward the outer edge. Zeckendorf loved the design and even showed it off to Le Corbusier when they met. The cost of such an unusual design was too high, however, and the building never progressed beyond the model stage.",
"title": "Career"
},
{
"paragraph_id": 20,
"text": "Pei finally saw his architecture come to life in 1949, when he designed a two-story corporate building for Gulf Oil in Atlanta, Georgia. The building was demolished in February 2013 although the front façade was retained as part of an apartment development. His use of marble for the exterior curtain wall brought praise from the journal Architectural Forum. Pei's designs echoed the work of Mies van der Rohe in the beginning of his career as also shown in his own weekend-house in Katonah, New York in 1952. Soon, Pei was so inundated with projects that he asked Zeckendorf for assistants, which he chose from his associates at the GSD, including Henry N. Cobb and Ulrich Franzen. They set to work on a variety of proposals, including the Roosevelt Field Shopping Mall on Long Island. The team also redesigned the Webb and Knapp office building, transforming Zeckendorf's office into a circular space with teak walls and a glass clerestory. They also installed a control panel into the desk that allowed their boss to control the lighting in his office. The project took one year and exceeded its budget, but Zeckendorf was delighted with the results.",
"title": "Career"
},
{
"paragraph_id": 21,
"text": "In 1952, Pei and his team began work on a series of projects in Denver, Colorado. The first of these was the Mile High Center, which compressed the core building into less than 25 percent of the total site; the rest is adorned with an exhibition hall and fountain-dotted plazas. One block away, Pei's team also redesigned Denver's Courthouse Square, which combined office spaces, commercial venues, and hotels. These projects helped Pei conceptualize architecture as part of the larger urban geography. \"I learned the process of development,\" he said later, \"and about the city as a living organism.\" These lessons, he said, became essential for later projects.",
"title": "Career"
},
{
"paragraph_id": 22,
"text": "Pei and his team also designed a united urban area for Washington, D.C., called L'Enfant Plaza (named for French-American architect Pierre Charles L'Enfant). Pei's associate Araldo Cossutta was the lead architect for the plaza's North Building (955 L'Enfant Plaza SW) and South Building (490 L'Enfant Plaza SW). Vlastimil Koubek was the architect for the East Building (L'Enfant Plaza Hotel, located at 480 L'Enfant Plaza SW), and for the Center Building (475 L'Enfant Plaza SW; now the United States Postal Service headquarters). The team set out with a broad vision that was praised by both The Washington Post and Washington Star (which rarely agreed on anything), but funding problems forced revisions and a significant reduction in scale.",
"title": "Career"
},
{
"paragraph_id": 23,
"text": "In 1955, Pei's group took a step toward institutional independence from Webb and Knapp by establishing a new firm called I. M. Pei & Associates. (The name changed later to I. M. Pei & Partners.) They gained the freedom to work with other companies, but continued working primarily with Zeckendorf. The new firm distinguished itself through the use of detailed architectural models. They took on the Kips Bay residential area on the East Side of Manhattan, where Pei set up Kips Bay Towers, two large long towers of apartments with recessed windows (to provide shade and privacy) in a neat grid, adorned with rows of trees. Pei involved himself in the construction process at Kips Bay, even inspecting the bags of cement to check for consistency of color.",
"title": "Career"
},
{
"paragraph_id": 24,
"text": "The company continued its urban focus with the Society Hill project in central Philadelphia. Pei designed the Society Hill Towers, a three-building residential block injecting cubist design into the 18th-century milieu of the neighborhood. As with previous projects, abundant green spaces were central to Pei's vision, which also added traditional townhouses to aid the transition from classical to modern design.",
"title": "Career"
},
{
"paragraph_id": 25,
"text": "From 1958 to 1963, Pei and Ray Affleck developed a key downtown block of Montreal in a phased process that involved one of Pei's most admired structures in the Commonwealth, the cruciform tower known as the Royal Bank Plaza (Place Ville Marie). According to The Canadian Encyclopedia \"its grand plaza and lower office buildings, designed by internationally famous US architect I. M. Pei, helped to set new standards for architecture in Canada in the 1960s ... The tower's smooth aluminum and glass surface and crisp unadorned geometric form demonstrate Pei's adherence to the mainstream of 20th-century modern design.\"",
"title": "Career"
},
{
"paragraph_id": 26,
"text": "Although those projects were satisfying, Pei wanted to establish an independent name for himself. In 1959, he was approached by MIT to design a building for its Earth science program. The Green Building continued the grid design of Kips Bay and Society Hill. The pedestrian walkway on the ground floor, however, was prone to sudden gusts of wind, which embarrassed Pei. \"Here I was from MIT,\" he said, \"and I didn't know about wind-tunnel effects.\" At the same time, he co-designed the Luce Memorial Chapel at Tunghai University in Taichung, Taiwan. The soaring structure, commissioned by the same organization that had run his middle school in Shanghai, broke severely from the cubist grid patterns of his urban projects.",
"title": "Career"
},
{
"paragraph_id": 27,
"text": "The challenge of coordinating those projects took an artistic toll on Pei. He found himself responsible for acquiring new building contracts and supervising the plans for them. As a result, he felt disconnected from the actual creative work. \"Design is something you have to put your hand to,\" he said. \"While my people had the luxury of doing one job at a time, I had to keep track of the whole enterprise.\" Pei's dissatisfaction reached its peak at a time when financial problems began plaguing Zeckendorf's firm. I. M. Pei and Associates officially broke from Webb and Knapp in 1960, which benefited Pei creatively but pained him personally. He had developed a close friendship with Zeckendorf, and both men were sad to part ways.",
"title": "Career"
},
{
"paragraph_id": 28,
"text": "Pei was able to return to hands-on design when he was approached in 1961 by Walter Orr Roberts to design the new Mesa Laboratory for the National Center for Atmospheric Research outside Boulder, Colorado. The project differed from Pei's earlier urban work because it rested in an open area in the foothills of the Rocky Mountains. He drove around the region with his wife, visiting assorted buildings and surveying the natural environs. He was impressed by the United States Air Force Academy in Colorado Springs, but felt it was \"detached from nature\".",
"title": "Career"
},
{
"paragraph_id": 29,
"text": "The conceptualization stages were important for Pei, presenting a need and an opportunity to break from the Bauhaus tradition. He later recalled the long periods of time he spent in the area: \"I recalled the places I had seen with my mother when I was a little boy—the mountaintop Buddhist retreats. There in the Colorado mountains, I tried to listen to the silence again—just as my mother had taught me. The investigation of the place became a kind of religious experience for me.\" Pei also drew inspiration from the Mesa Verde cliff dwellings of the Ancestral Puebloans; he wanted the buildings to exist in harmony with their natural surroundings. To this end, he called for a rock-treatment process that could color the buildings to match the nearby mountains. He also set the complex back on the mesa overlooking the city, and designed the approaching road to be long, winding, and indirect.",
"title": "Career"
},
{
"paragraph_id": 30,
"text": "Roberts disliked Pei's initial designs, referring to them as \"just a bunch of towers\". Roberts intended his comments as typical of scientific experimentation, rather than artistic critique, but Pei was frustrated. His second attempt, however, fitted Roberts' vision perfectly: a spaced-out series of clustered buildings, joined by lower structures and complemented by two underground levels. The complex used many elements of cubist design, and the walkways were arranged to increase the probability of casual encounters among colleagues.",
"title": "Career"
},
{
"paragraph_id": 31,
"text": "Once the laboratory was built, several problems with its construction became apparent. Leaks in the roof caused difficulties for researchers, and the shifting of clay soil beneath the building caused cracks which were expensive to repair. Still, both architect and project manager were pleased with the final result. Pei referred to the NCAR complex as his \"breakout building\", and he remained a friend of Roberts until the scientist died in March 1990.",
"title": "Career"
},
{
"paragraph_id": 32,
"text": "The success of NCAR brought renewed attention to Pei's design acumen. He was recruited to work on a variety of projects, including the S. I. Newhouse School of Public Communications at Syracuse University, the Everson Museum of Art in Syracuse, New York, the Sundrome terminal at John F. Kennedy International Airport in New York City, and dormitories at New College of Florida.",
"title": "Career"
},
{
"paragraph_id": 33,
"text": "After President John F. Kennedy was assassinated in November 1963, his family and friends discussed how to construct a library that would serve as a fitting memorial. A committee was formed to advise Kennedy's widow Jacqueline, who would make the final decision. The group deliberated for months and considered many famous architects. Eventually, Kennedy chose Pei to design the library, based on two considerations. First, she appreciated the variety of ideas he had used for earlier projects. \"He didn't seem to have just one way to solve a problem,\" she said. \"He seemed to approach each commission thinking only of it and then develop a way to make something beautiful.\" Ultimately, however, Kennedy made her choice based on her personal connection with Pei. Calling it \"really an emotional decision\", she explained: \"He was so full of promise, like Jack; they were born in the same year. I decided it would be fun to take a great leap with him.\"",
"title": "Career"
},
{
"paragraph_id": 34,
"text": "The project was plagued with problems from the outset. The first was scope. President Kennedy had begun considering the structure of his library soon after taking office, and he wanted to include archives from his administration, a museum of personal items, and a political science institute. After the assassination, the list expanded to include a fitting memorial tribute to the slain president. The variety of necessary inclusions complicated the design process and caused significant delays.",
"title": "Career"
},
{
"paragraph_id": 35,
"text": "Pei's first proposed design included a large glass pyramid that would fill the interior with sunlight, meant to represent the optimism and hope that Kennedy's administration had symbolized for so many in the United States. Mrs. Kennedy liked the design, but resistance began in Cambridge, the first proposed site for the building, as soon as the project was announced. Many community members worried that the library would become a tourist attraction, causing particular problems with traffic congestion. Others worried that the design would clash with the architectural feel of nearby Harvard Square. By the mid-1970s, Pei tried proposing a new design, but the library's opponents resisted every effort. These events pained Pei, who had sent all three of his sons to Harvard, and although he rarely discussed his frustration, it was evident to his wife. \"I could tell how tired he was by the way he opened the door at the end of the day,\" she said. \"His footsteps were dragging. It was very hard for I. M. to see that so many people didn't want the building.\"",
"title": "Career"
},
{
"paragraph_id": 36,
"text": "Finally the project moved to Columbia Point, near the University of Massachusetts Boston. The new site was less than ideal; it was located on an old landfill, and just over a large sewage pipe. Pei's architectural team added more fill to cover the pipe and developed an elaborate ventilation system to conquer the odor. A new design was unveiled, combining a large square glass-enclosed atrium with a triangular tower and a circular walkway.",
"title": "Career"
},
{
"paragraph_id": 37,
"text": "The John F. Kennedy Presidential Library and Museum was dedicated on October 20, 1979. Critics generally liked the finished building, but the architect himself was unsatisfied. The years of conflict and compromise had changed the nature of the design, and Pei felt that the final result lacked its original passion. \"I wanted to give something very special to the memory of President Kennedy,\" he said in 2000. \"It could and should have been a great project.\" Pei's work on the Kennedy project boosted his reputation as an architect of note.",
"title": "Career"
},
{
"paragraph_id": 38,
"text": "",
"title": "Career"
},
{
"paragraph_id": 39,
"text": "The Pei Plan was a failed urban redevelopment initiative designed for downtown Oklahoma City, Oklahoma, in 1964. The plan called for the demolition of hundreds of old downtown structures in favor of renewed parking, office building, and retail developments, in addition to public projects such as the Myriad Convention Center and the Myriad Botanical Gardens. It was the dominant template for downtown development in Oklahoma City from its inception through the 1970s. The plan generated mixed results and opinion, largely succeeding in re-developing office building and parking infrastructure but failing to attract its anticipated retail and residential development. Significant public resentment also developed as a result of the destruction of multiple historic structures. As a result, Oklahoma City's leadership avoided large-scale urban planning for downtown throughout the 1980s and early 1990s, until the passage of the Metropolitan Area Projects (MAPS) initiative in 1993.",
"title": "Career"
},
{
"paragraph_id": 40,
"text": "Another city which turned to Pei for urban renewal during this time was Providence, Rhode Island. In the late 1960s, Providence hired Pei to redesign Cathedral Square, a once-bustling civic center which had become neglected and empty, as part of an ambitious larger plan to redesign downtown. Pei's new plaza, modeled after the Greek Agora marketplace, opened in 1972. The city ran out of money before Pei's vision could be fully realized. Also, recent construction of a low-income housing complex and Interstate 95 had changed the neighborhood's character permanently. In 1974, The Providence Evening Bulletin called Pei's new plaza a \"conspicuous failure\". By 2016, media reports characterized the plaza as a neglected, little-visited \"hidden gem\".",
"title": "Career"
},
{
"paragraph_id": 41,
"text": "In 1974, the city of Augusta, Georgia turned to Pei and his firm for downtown revitalization. The Chamber of Commerce building and Bicentennial Park were completed from his plan. In 1976, Pei designed a distinctive modern penthouse that was added to the roof of architect William Lee Stoddart's historic Lamar Building, designed in 1916. In 1980, Pei and his company designed the Augusta Civic Center, now known as the James Brown Arena.",
"title": "Career"
},
{
"paragraph_id": 42,
"text": "Kennedy's assassination also led indirectly to another commission for Pei's firm. In 1964 the acting mayor of Dallas, Erik Jonsson, began working to change the community's image. Dallas was known and disliked as the city where the president had been killed, but Jonsson began a program designed to initiate a community renewal. One of the goals was a new city hall, which could be a \"symbol of the people\". Jonsson, a co-founder of Texas Instruments, learned about Pei from his associate Cecil Howard Green, who had recruited the architect for MIT's Earth Sciences building.",
"title": "Career"
},
{
"paragraph_id": 43,
"text": "Pei's approach to the new Dallas City Hall mirrored those of other projects; he surveyed the surrounding area and worked to make the building fit. In the case of Dallas, he spent days meeting with residents of the city and was impressed by their civic pride. He also found that the skyscrapers of the downtown business district dominated the skyline, and sought to create a building which could face the tall buildings and represent the importance of the public sector. He spoke of creating \"a public-private dialogue with the commercial high-rises\".",
"title": "Career"
},
{
"paragraph_id": 44,
"text": "Working with his associate Theodore Musho, Pei developed a design centered on a building with a top much wider than the bottom; the facade leans at an angle of 34 degrees, which shades the building from the Texas sun. A plaza stretches out before the building, and a series of support columns holds it up. It was influenced by Le Corbusier's High Court building in Chandigarh, India; Pei sought to use the significant overhang to unify the building and plaza. The project cost much more than initially expected, and took 11 years to complete. Revenue was secured in part by including a subterranean parking garage. The interior of the city hall is large and spacious; windows in the ceiling above the eighth floor fill the main space with light.",
"title": "Career"
},
{
"paragraph_id": 45,
"text": "The city of Dallas received the building well, and a local television news crew found unanimous approval of the new city hall when it officially opened to the public in 1978. Pei himself considered the project a success, even as he worried about the arrangement of its elements. He said: \"It's perhaps stronger than I would have liked; it's got more strength than finesse.\" He felt that his relative lack of experience left him without the necessary design tools to refine his vision, but the community liked the city hall enough to invite him back. Over the years he went on to design five additional buildings in the Dallas area.",
"title": "Career"
},
{
"paragraph_id": 46,
"text": "While Pei and Musho were coordinating the Dallas project, their associate Henry Cobb had taken the helm for a commission in Boston. John Hancock Insurance chairman Robert Slater hired I. M. Pei & Partners to design a building that could overshadow the Prudential Tower, erected by their rival.",
"title": "Career"
},
{
"paragraph_id": 47,
"text": "After the firm's first plan was discarded due to a need for more office space, Cobb developed a new plan around a towering parallelogram, slanted away from the Trinity Church and accented by a wedge cut into each narrow side. To minimize the visual impact, the building was covered in large reflective glass panels; Cobb said this would make the building a \"background and foil\" to the older structures around it. When the Hancock Tower was finished in 1976, it was the tallest building in New England.",
"title": "Career"
},
{
"paragraph_id": 48,
"text": "Serious issues of execution became evident in the tower almost immediately. Many glass panels fractured in a windstorm during construction in 1973. Some detached and fell to the ground, causing no injuries but sparking concern among Boston residents. In response, the entire tower was reglazed with smaller panels. This significantly increased the cost of the project. Hancock sued the glass manufacturers, Libbey-Owens-Ford, as well as I. M. Pei & Partners, for submitting plans that were \"not good and workmanlike\". LOF countersued Hancock for defamation, accusing Pei's firm of poor use of their materials; I. M. Pei & Partners sued LOF in return. All three companies settled out of court in 1981.",
"title": "Career"
},
{
"paragraph_id": 49,
"text": "The project became an albatross for Pei's firm. Pei himself refused to discuss it for many years. The pace of new commissions slowed and the firm's architects began looking overseas for opportunities. Cobb worked in Australia and Pei took on jobs in Singapore, Iran, and Kuwait. Although it was a difficult time for everyone involved, Pei later reflected with patience on the experience. \"Going through this trial toughened us,\" he said. \"It helped to cement us as partners; we did not give up on each other.\"",
"title": "Career"
},
{
"paragraph_id": 50,
"text": "In the mid-1960s, directors of the National Gallery of Art in Washington, D.C., declared the need for a new building. Paul Mellon, a primary benefactor of the gallery and a member of its building committee, set to work with his assistant J. Carter Brown (who became gallery director in 1969) to find an architect. The new structure would be located to the east of the original building, and tasked with two functions: offer a large space for public appreciation of various popular collections; and house office space as well as archives for scholarship and research. They likened the scope of the new facility to the Library of Alexandria. After inspecting Pei's work at the Des Moines Art Center in Iowa and the Johnson Museum at Cornell University, they offered him the commission.",
"title": "Career"
},
{
"paragraph_id": 51,
"text": "Pei took to the project with vigor, and set to work with two young architects he had recently recruited to the firm, William Pedersen and Yann Weymouth. Their first obstacle was the unusual shape of the building site, a trapezoid of land at the intersection of Constitution and Pennsylvania Avenues. Inspiration struck Pei in 1968, when he scrawled a rough diagram of two triangles on a scrap of paper. The larger building would be the public gallery; the smaller would house offices and archives. This triangular shape became a singular vision for the architect. As the date for groundbreaking approached, Pedersen suggested to his boss that a slightly different approach would make construction easier. Pei simply smiled and said: \"No compromises.\"",
"title": "Career"
},
{
"paragraph_id": 52,
"text": "The growing popularity of art museums presented unique challenges to the architecture. Mellon and Pei both expected large crowds of people to visit the new building, and they planned accordingly. To this end, Pei designed a large lobby roofed with enormous skylights. Individual galleries are located along the periphery, allowing visitors to return after viewing each exhibit to the spacious main room. A large mobile sculpture by American artist Alexander Calder was later added to the lobby. Pei hoped the lobby would be exciting to the public in the same way as the central room of the Guggenheim Museum is in New York City. The modern museum, he said later, \"must pay greater attention to its educational responsibility, especially to the young\".",
"title": "Career"
},
{
"paragraph_id": 53,
"text": "Materials for the building's exterior were chosen with careful precision. To match the look and texture of the original gallery's marble walls, builders re-opened the quarry in Knoxville, Tennessee, from which the first batch of stone had been harvested. The project even found and hired Malcolm Rice, a quarry supervisor who had overseen the original 1941 gallery project. The marble was cut into three-inch-thick blocks and arranged over the concrete foundation, with darker blocks at the bottom and lighter blocks on top.",
"title": "Career"
},
{
"paragraph_id": 54,
"text": "The East Building was honored on May 30, 1978, two days before its public unveiling, with a black-tie party attended by celebrities, politicians, benefactors, and artists. When the building opened, popular opinion was enthusiastic. Large crowds visited the new museum, and critics generally voiced their approval. Ada Louise Huxtable wrote in The New York Times that Pei's building was \"a palatial statement of the creative accommodation of contemporary art and architecture\". The sharp angle of the smaller building has been a particular note of praise for the public; over the years it has become stained and worn from the hands of visitors.",
"title": "Career"
},
{
"paragraph_id": 55,
"text": "Some critics disliked the unusual design, however, and criticized the reliance on triangles throughout the building. Others took issue with the large main lobby, particularly its attempt to lure casual visitors. In his review for Artforum, critic Richard Hennessy described a \"shocking fun-house atmosphere\" and \"aura of ancient Roman patronage\". One of the earliest and most vocal critics, however, came to appreciate the new gallery once he saw it in person. Allan Greenberg had scorned the design when it was first unveiled, but wrote later to J. Carter Brown: \"I am forced to admit that you are right and I was wrong! The building is a masterpiece.\"",
"title": "Career"
},
{
"paragraph_id": 56,
"text": "After U.S. President Richard Nixon made his famous 1972 visit to China, a wave of exchanges took place between the two countries. One of these was a delegation of the American Institute of Architects in 1974, which Pei joined. It was his first trip back to China since leaving in 1935. He was favorably received, returned the welcome with positive comments, and a series of lectures ensued. Pei noted in one lecture that since the 1950s Chinese architects had been content to imitate Western styles; he urged his audience in one lecture to search China's native traditions for inspiration.",
"title": "Career"
},
{
"paragraph_id": 57,
"text": "In 1978, Pei was asked to initiate a project for his home country. After surveying a number of different locations, Pei fell in love with a valley that had once served as an imperial garden and hunting preserve known as Fragrant Hills. The site housed a decrepit hotel; Pei was invited to tear it down and build a new one. As usual, he approached the project by carefully considering the context and purpose. Likewise, he considered modernist styles inappropriate for the setting. Thus, he said, it was necessary to find \"a third way\".",
"title": "Career"
},
{
"paragraph_id": 58,
"text": "After visiting his ancestral home in Suzhou, Pei created a design based on some simple but nuanced techniques he admired in traditional residential Chinese buildings. Among these were abundant gardens, integration with nature, and consideration of the relationship between enclosure and opening. Pei's design included a large central atrium covered by glass panels that functioned much like the large central space in his East Building of the National Gallery. Openings of various shapes in walls invited guests to view the natural scenery beyond. Younger Chinese who had hoped the building would exhibit some of Cubist flavor for which Pei had become known were disappointed, but the new hotel found more favor with government officials and architects.",
"title": "Career"
},
{
"paragraph_id": 59,
"text": "The hotel, with 325 guest rooms and a four-story central atrium, was designed to fit perfectly into its natural habitat. The trees in the area were of special concern, and particular care was taken to cut down as few as possible. He worked with an expert from Suzhou to preserve and renovate a water maze from the original hotel, one of only five in the country. Pei was also meticulous about the arrangement of items in the garden behind the hotel; he even insisted on transporting 230 short tons (210 t) of rocks from a location in southwest China to suit the natural aesthetic. An associate of Pei's said later that he never saw the architect so involved in a project.",
"title": "Career"
},
{
"paragraph_id": 60,
"text": "During construction, a series of mistakes collided with the nation's lack of technology to strain relations between architects and builders. Whereas 200 or so workers might have been used for a similar building in the US, the Fragrant Hill project employed over 3,000 workers. This was mostly because the construction company lacked the sophisticated machines used in other parts of the world. The problems continued for months, until Pei had an uncharacteristically emotional moment during a meeting with Chinese officials. He later explained that his actions included \"shouting and pounding the table\" in frustration. The design staff noticed a difference in the manner of work among the crew after the meeting. As the opening neared, however, Pei found the hotel still needed work. He began scrubbing floors with his wife and ordered his children to make beds and vacuum floors. The project's difficulties took an emotional and physical strain on the Pei family.",
"title": "Career"
},
{
"paragraph_id": 61,
"text": "The Fragrant Hill Hotel opened on October 17, 1982, but quickly fell into disrepair. A member of Pei's staff returned for a visit several years later and confirmed the dilapidated condition of the hotel. He and Pei attributed this to the country's general unfamiliarity with deluxe buildings. The Chinese architectural community at the time gave the structure little attention, as their interest at the time centered on the work of American postmodernists such as Michael Graves.",
"title": "Career"
},
{
"paragraph_id": 62,
"text": "As the Fragrant Hill project neared completion, Pei began work on the Jacob K. Javits Convention Center in New York City, for which his associate James Freed served as lead designer. Hoping to create a vibrant community institution in what was then a run-down neighborhood on Manhattan's west side, Freed developed a glass-coated structure with an intricate space frame of interconnected metal rods and spheres.",
"title": "Career"
},
{
"paragraph_id": 63,
"text": "The convention center was plagued from the start by budget problems and construction blunders. City regulations forbid a general contractor having final authority over the project, so architects and program manager Richard Kahan had to coordinate the wide array of builders, plumbers, electricians, and other workers. The forged steel globes to be used in the space frame came to the site with hairline cracks and other defects: 12,000 were rejected. These and other problems led to media comparisons with the disastrous Hancock Tower. One New York City official blamed Kahan for the difficulties, indicating that the building's architectural flourishes were responsible for delays and financial crises. The Javits Center opened on April 3, 1986, to a generally positive reception. During the inauguration ceremonies, however, neither Freed nor Pei was recognized for their role in the project.",
"title": "Career"
},
{
"paragraph_id": 64,
"text": "When François Mitterrand was elected President of France in 1981, he laid out an ambitious plan for a variety of construction projects. One of these was the renovation of the Louvre. Mitterrand appointed a civil servant named Émile Biasini [fr] to oversee it. After visiting museums in Europe and the United States, including the U.S. National Gallery, he asked Pei to join the team. The architect made three secretive trips to Paris, to determine the feasibility of the project; only one museum employee knew why he was there. Pei finally agreed that a new construction project was not only possible, but necessary for the future of the museum. He thus became the first foreign architect to work on the Louvre.",
"title": "Career"
},
{
"paragraph_id": 65,
"text": "The heart of the new design included not only a renovation of the Cour Napoléon in the midst of the buildings, but also a transformation of the interiors. Pei proposed a central entrance, not unlike the lobby of the National Gallery East Building, which would link the three major wings around the central space. Below would be a complex of additional floors for research, storage, and maintenance purposes. At the center of the courtyard he designed a glass and steel pyramid, first proposed with the Kennedy Library, to serve as entrance and anteroom skylight. It was mirrored by an inverted pyramid to the west, to reflect sunlight into the complex. These designs were partly an homage to the fastidious geometry of the French landscape architect André Le Nôtre (1613–1700). Pei also found the pyramid shape best suited for stable transparency, and considered it \"most compatible with the architecture of the Louvre, especially with the faceted planes of its roofs\".",
"title": "Career"
},
{
"paragraph_id": 66,
"text": "Biasini and Mitterrand liked the plans, but the scope of the renovation displeased Louvre administrator André Chabaud. He resigned from his post, complaining that the project was \"unfeasible\" and posed \"architectural risks\". Some sections of the French public also reacted harshly to the design, mostly because of the proposed pyramid. One critic called it a \"gigantic, ruinous gadget\"; another charged Mitterrand with \"despotism\" for inflicting Paris with the \"atrocity\". Pei estimated that 90 percent of Parisians opposed his design. \"I received many angry glances in the streets of Paris,\" he said. Some condemnations carried nationalistic overtones. One opponent wrote: \"I am surprised that one would go looking for a Chinese architect in America to deal with the historic heart of the capital of France.\"",
"title": "Career"
},
{
"paragraph_id": 67,
"text": "Soon, however, Pei and his team won the support of several key cultural icons, including the conductor Pierre Boulez and Claude Pompidou, widow of former French President Georges Pompidou, after whom the similarly controversial Centre Georges Pompidou was named. In an attempt to soothe public ire, Pei took a suggestion from then-mayor of Paris Jacques Chirac and placed a full-sized cable model of the pyramid in the courtyard. During the four days of its exhibition, an estimated 60,000 people visited the site. Some critics eased their opposition after witnessing the proposed scale of the pyramid.",
"title": "Career"
},
{
"paragraph_id": 68,
"text": "Pei demanded a method of glass production that resulted in clear panes. The pyramid was constructed at the same time as the subterranean levels below, which caused difficulties during the building stages. As they worked, construction teams came upon an abandoned set of rooms containing 25,000 historical items; these were incorporated into the rest of the structure to add a new exhibition zone.",
"title": "Career"
},
{
"paragraph_id": 69,
"text": "The new Cour Napoléon was opened to the public on October 14, 1988, and the Pyramid entrance was opened the following March. By this time, public opposition had softened; a poll found a 56 percent approval rating for the pyramid, with 23 percent still opposed. The newspaper Le Figaro had vehemently criticized Pei's design, but later celebrated the tenth anniversary of its magazine supplement at the pyramid. Prince Charles of Britain surveyed the new site with curiosity, and declared it \"marvelous, very exciting\". A writer in Le Quotidien de Paris wrote: \"The much-feared pyramid has become adorable.\"",
"title": "Career"
},
{
"paragraph_id": 70,
"text": "The experience was exhausting for Pei, but also rewarding. \"After the Louvre,\" he said later, \"I thought no project would be too difficult.\" The pyramid achieved further widespread international recognition for its central role in the plot at the denouement of The Da Vinci Code by Dan Brown and its appearance in the final scene of the subsequent screen adaptation. The Louvre Pyramid became Pei's most famous structure.",
"title": "Career"
},
{
"paragraph_id": 71,
"text": "The opening of the Louvre Pyramid coincided with four other projects on which Pei had been working, prompting architecture critic Paul Goldberger to declare 1989 \"the year of Pei\" in The New York Times. It was also the year in which Pei's firm changed its name to Pei Cobb Freed & Partners, to reflect the increasing stature and prominence of his associates. At the age of 72, Pei had begun thinking about retirement, but continued working long hours to see his designs come to light.",
"title": "Career"
},
{
"paragraph_id": 72,
"text": "One of the projects took Pei back to Dallas, Texas, to design the Morton H. Meyerson Symphony Center. The success of city's performing artists, particularly the Dallas Symphony Orchestra then led by conductor Eduardo Mata, led to interest by city leaders in creating a modern center for musical arts that could rival the best halls in Europe. The organizing committee contacted 45 architects, but at first Pei did not respond, thinking that his work on the Dallas City Hall had left a negative impression. One of his colleagues from that project, however, insisted that he meet with the committee. He did and, although it would be his first concert hall, the committee voted unanimously to offer him the commission. As one member put it: \"We were convinced that we would get the world's greatest architect putting his best foot forward.\"",
"title": "Career"
},
{
"paragraph_id": 73,
"text": "The project presented a variety of specific challenges. Because its main purpose was the presentation of live music, the hall needed a design focused on acoustics first, then public access and exterior aesthetics. To this end, a professional sound technician was hired to design the interior. He proposed a shoebox auditorium, used in the acclaimed designs of top European symphony halls such as the Amsterdam Concertgebouw and Vienna Musikverein. Pei drew inspiration for his adjustments from the designs of the German architect Johann Balthasar Neumann, especially the Basilica of the Fourteen Holy Helpers. He also sought to incorporate some of the panache of the Paris Opéra designed by Charles Garnier.",
"title": "Career"
},
{
"paragraph_id": 74,
"text": "Pei's design placed the rigid shoebox at an angle to the surrounding street grid, connected at the north end to a long rectangular office building, and cut through the middle with an assortment of circles and cones. The design attempted to reproduce with modern features the acoustic and visual functions of traditional elements like filigree. The project was risky: its goals were ambitious and any unforeseen acoustic flaws would be virtually impossible to remedy after the hall's completion. Pei admitted that he did not completely know how everything would come together. \"I can imagine only 60 percent of the space in this building,\" he said during the early stages. \"The rest will be as surprising to me as to everyone else.\" As the project developed, costs rose steadily and some sponsors considered withdrawing their support. Billionaire tycoon Ross Perot made a donation of US$10 million, on the condition that it be named in honor of Morton H. Meyerson, the longtime patron of the arts in Dallas.",
"title": "Career"
},
{
"paragraph_id": 75,
"text": "The building opened and immediately garnered widespread praise, especially for its acoustics. After attending a week of performances in the hall, a music critic for The New York Times wrote an enthusiastic account of the experience and congratulated the architects. One of Pei's associates told him during a party before the opening that the symphony hall was \"a very mature building\"; he smiled and replied: \"Ah, but did I have to wait this long?\"",
"title": "Career"
},
{
"paragraph_id": 76,
"text": "A new offer had arrived for Pei from the Chinese government in 1982. With an eye toward the transfer of sovereignty over Hong Kong from the British in 1997, authorities in China sought Pei's aid on a new tower for the local branch of the Bank of China. The Chinese government was preparing for a new wave of engagement with the outside world and sought a tower to represent modernity and economic strength. Given the elder Pei's history with the bank before the Communist takeover, government officials visited the 89-year-old man in New York to gain approval for his son's involvement. Pei then spoke with his father at length about the proposal. Although the architect remained pained by his experience with Fragrant Hills, he agreed to accept the commission.",
"title": "Career"
},
{
"paragraph_id": 77,
"text": "The proposed site in Hong Kong's Central District was less than ideal; a tangle of highways lined it on three sides. The area had also been home to a headquarters for Japanese military police during World War II, and was notorious for prisoner torture. The small parcel of land made a tall tower necessary, and Pei had usually shied away from such projects; in Hong Kong especially, the skyscrapers lacked any real architectural character. Lacking inspiration and unsure of how to approach the building, Pei took a weekend vacation to the family home in Katonah, New York. There he found himself experimenting with a bundle of sticks until he happened upon a cascading sequence.",
"title": "Career"
},
{
"paragraph_id": 78,
"text": "Pei felt that his design for the Bank of China Tower needed to reflect \"the aspirations of the Chinese people\". The design that he developed for the skyscraper was not only unique in appearance, but also sound enough to pass the city's rigorous standards for wind-resistance. The building is composed of four triangular shafts rising up from a square base, supported by a visible truss structure that distributes stress to the four corners of the base. Using the reflective glass that had become something of a trademark for him, Pei organized the facade around diagonal bracing in a union of structure and form that reiterates the triangle motif established in the plan. At the top, he designed the roofs at sloping angles to match the rising aesthetic of the building. Some influential advocates of feng shui in Hong Kong and China criticized the design, and Pei and government officials responded with token adjustments.",
"title": "Career"
},
{
"paragraph_id": 79,
"text": "As the tower neared completion, Pei was shocked to witness the government's massacre of unarmed civilians at the Tiananmen Square protests of 1989. He wrote an opinion piece for The New York Times titled \"China Won't Ever Be the Same\", in which he said that the killings \"tore the heart out of a generation that carries the hope for the future of the country\". The massacre deeply disturbed his entire family, and he wrote that \"China is besmirched.\"",
"title": "Career"
},
{
"paragraph_id": 80,
"text": "As the 1990s began, Pei transitioned into a role of decreased involvement with his firm. The staff had begun to shrink, and Pei wanted to dedicate himself to smaller projects allowing for more creativity. Before he made this change, however, he set to work on his last major project as active partner: the Rock and Roll Hall of Fame in Cleveland, Ohio. Considering his work on such bastions of high culture as the Louvre and U.S. National Gallery, some critics were surprised by his association with what many considered a tribute to low culture. The sponsors of the hall, however, sought Pei for specifically this reason; they wanted the building to have an aura of respectability from the beginning. As in the past, Pei accepted the commission in part because of the unique challenge it presented.",
"title": "Career"
},
{
"paragraph_id": 81,
"text": "Using a glass wall for the entrance, similar in appearance to his Louvre pyramid, Pei coated the exterior of the main building in white metal, and placed a large cylinder on a narrow perch to serve as a performance space. The combination of off-centered wraparounds and angled walls was, Pei said, designed to provide \"a sense of tumultuous youthful energy, rebelling, flailing about\".",
"title": "Career"
},
{
"paragraph_id": 82,
"text": "The building opened in 1995, and was received with moderate praise. The New York Times called it \"a fine building\", but Pei was among those who felt disappointed with the results. The museum's early beginnings in New York combined with an unclear mission created a fuzzy understanding among project leaders for precisely what was needed. Although the city of Cleveland benefited greatly from the new tourist attraction, Pei was unhappy with it.",
"title": "Career"
},
{
"paragraph_id": 83,
"text": "At the same time, Pei designed a new museum for Luxembourg, the Musée d'art moderne Grand-Duc Jean, commonly known as the Mudam. Drawing from the original shape of the Fort Thüngen walls where the museum was located, Pei planned to remove a portion of the original foundation. Public resistance to the historical loss forced a revision of his plan, however, and the project was nearly abandoned. The size of the building was halved, and it was set back from the original wall segments to preserve the foundation. Pei was disappointed with the alterations, but remained involved in the building process even during construction.",
"title": "Career"
},
{
"paragraph_id": 84,
"text": "In 1995, Pei was hired to design an extension to the Deutsches Historisches Museum, or German Historical Museum in Berlin. Returning to the challenge of the East Building of the U.S. National Gallery, Pei worked to combine a modernist approach with a classical main structure. He described the glass cylinder addition as a \"beacon\", and topped it with a glass roof to allow plentiful sunlight inside. Pei had difficulty working with German government officials on the project; their utilitarian approach clashed with his passion for aesthetics. \"They thought I was nothing but trouble\", he said.",
"title": "Career"
},
{
"paragraph_id": 85,
"text": "Pei also worked at this time on two projects for a new Japanese religious movement called Shinji Shumeikai. He was approached by the movement's spiritual leader, Kaishu Koyama, who impressed the architect with her sincerity and willingness to give him significant artistic freedom. One of the buildings was a bell tower, designed to resemble the bachi used when playing traditional instruments like the shamisen. Pei was unfamiliar with the movement's beliefs, but explored them in order to represent something meaningful in the tower. As he said: \"It was a search for the sort of expression that is not at all technical.\"",
"title": "Career"
},
{
"paragraph_id": 86,
"text": "The experience was rewarding for Pei, and he agreed immediately to work with the group again. The new project was the Miho Museum, to display Koyama's collection of tea ceremony artifacts. Pei visited the site in Shiga Prefecture, and during their conversations convinced Koyama to expand her collection. She conducted a global search and acquired more than 300 items showcasing the history of the Silk Road.",
"title": "Career"
},
{
"paragraph_id": 87,
"text": "One major challenge was the approach to the museum. The Japanese team proposed a winding road up the mountain, not unlike the approach to the NCAR building in Colorado. Instead, Pei ordered a hole cut through a nearby mountain, connected to a major road via a bridge suspended from ninety-six steel cables and supported by a post set into the mountain. The museum itself was built into the mountain, with 80 percent of the building underground.",
"title": "Career"
},
{
"paragraph_id": 88,
"text": "When designing the exterior, Pei borrowed from the tradition of Japanese temples, particularly those found in nearby Kyoto. He created a concise spaceframe wrapped into French limestone and covered with a glass roof. Pei also oversaw specific decorative details, including a bench in the entrance lobby, carved from a 350-year-old keyaki tree. Because of Koyama's considerable wealth, money was rarely considered an obstacle; estimates at the time of completion put the cost of the project at US$350 million.",
"title": "Career"
},
{
"paragraph_id": 89,
"text": "During the first decade of the 2000s, Pei designed a variety of buildings, including the Suzhou Museum near his childhood home. He also designed the Museum of Islamic Art in Doha, Qatar, at the request of the Al-Thani Family. Although it was originally planned for the corniche road along Doha Bay, Pei convinced the project coordinators to build a new island to provide the needed space. He then spent six months touring the region and surveying mosques in Spain, Syria, and Tunisia. He was especially impressed with the elegant simplicity of the Mosque of Ibn Tulun in Cairo.",
"title": "Career"
},
{
"paragraph_id": 90,
"text": "Once again, Pei sought to combine new design elements with the classical aesthetic most appropriate for the location of the building. The sand-colored rectangular boxes rotate evenly to create a subtle movement, with small arched windows at regular intervals into the limestone exterior. Inside, galleries are arranged around a massive atrium, lit from above. The museum's coordinators were pleased with the project; its official website describes its \"true splendour unveiled in the sunlight,\" and speaks of \"the shades of colour and the interplay of shadows paying tribute to the essence of Islamic architecture\".",
"title": "Career"
},
{
"paragraph_id": 91,
"text": "The Macao Science Center in Macau was designed by Pei Partnership Architects in association with I. M. Pei. The project to build the science center was conceived in 2001 and construction started in 2006. The center was completed in 2009 and opened by the Chinese President Hu Jintao. The main part of the building is a distinctive conical shape with a spiral walkway and large atrium inside, similar to that of the Solomon R. Guggenheim Museum in New York City. Galleries lead off the walkway, mainly consisting of interactive exhibits aimed at science education. The building is in a prominent position by the sea and is now a Macau landmark.",
"title": "Career"
},
{
"paragraph_id": 92,
"text": "Pei's career ended with his death in May 2019, at 102 years of age.",
"title": "Career"
},
{
"paragraph_id": 93,
"text": "Pei's style was described as thoroughly modernist, with significant cubist themes. He was known for combining traditional architectural principles with progressive designs based on simple geometric patterns—circles, squares, and triangles are common elements of his work in both plan and elevation. As one critic wrote: \"Pei has been aptly described as combining a classical sense of form with a contemporary mastery of method.\" In 2000, biographer Carter Wiseman called Pei \"the most distinguished member of his Late-Modernist generation still in practice\". At the same time, Pei himself rejected simple dichotomies of architectural trends. He once said: \"The talk about modernism versus post-modernism is unimportant. It's a side issue. An individual building, the style in which it is going to be designed and built, is not that important. The important thing, really, is the community. How does it affect life?\"",
"title": "Style and method"
},
{
"paragraph_id": 94,
"text": "Pei's work is celebrated throughout the world of architecture. His colleague John Portman once told him: \"Just once, I'd like to do something like the East Building.\" But this originality did not always bring large financial reward; as Pei replied to the successful architect: \"Just once, I'd like to make the kind of money you do.\" His concepts, moreover, were too individualized and dependent on context to have given rise to a particular school of design. Pei referred to his own \"analytical approach\" when explaining the lack of a \"Pei School\".",
"title": "Style and method"
},
{
"paragraph_id": 95,
"text": "\"For me,\" he said, \"the important distinction is between a stylistic approach to the design; and an analytical approach giving the process of due consideration to time, place, and purpose ... My analytical approach requires a full understanding of the three essential elements ... to arrive at an ideal balance among them.\"",
"title": "Style and method"
},
{
"paragraph_id": 96,
"text": "In the words of his biographer, Pei won \"every award of any consequence in his art\", including the Arnold Brunner Award from the National Institute of Arts and Letters (1963), the Gold Medal for Architecture from the American Academy of Arts and Letters (1979), the AIA Gold Medal (1979), the first Praemium Imperiale for Architecture from the Japan Art Association (1989), the Lifetime Achievement Award from the Cooper-Hewitt, National Design Museum, the 1998 Edward MacDowell Medal in the Arts, and the 2010 Royal Gold Medal from the Royal Institute of British Architects. In 1983 he was awarded the Pritzker Prize, sometimes referred to as the Nobel Prize of architecture. In its citation, the jury said: \"Ieoh Ming Pei has given this century some of its most beautiful interior spaces and exterior forms ... His versatility and skill in the use of materials approach the level of poetry.\" The prize was accompanied by a US$100,000 award, which Pei used to create a scholarship for Chinese students to study architecture in the U.S., on the condition that they return to China to work. In 1986, he was one of twelve recipients of the Medal of Liberty. When he was awarded the 2003 Henry C. Turner Prize by the National Building Museum, museum board chair Carolyn Brody praised his impact on construction innovation: \"His magnificent designs have challenged engineers to devise innovative structural solutions, and his exacting expectations for construction quality have encouraged contractors to achieve high standards.\" In December 1992, Pei was awarded the Presidential Medal of Freedom by President George H. W. Bush. In 1996, Pei became the first person to be elected a foreign member of the Chinese Academy of Engineering. Pei was also an elected member of the American Academy of Arts and Sciences and the American Philosophical Society.",
"title": "Awards and honors"
},
{
"paragraph_id": 97,
"text": "Pei's wife of over 70 years, Eileen Loo, died on June 20, 2014. Together they had three sons, T'ing Chung (1945–2003), Chien Chung ( 1946-2023; known as Didi), and Li Chung (b. 1949; known as Sandi); and a daughter, Liane (b. 1960). T'ing Chung was an urban planner and alumnus of his father's alma mater MIT and Harvard. Chieng Chung and Li Chung, who are both Harvard College and Harvard Graduate School of Design alumni, founded and run Pei Partnership Architects. Liane is a lawyer.",
"title": "Personal life"
},
{
"paragraph_id": 98,
"text": "In 2015, Pei's home health aide, Eter Nikolaishvili, grabbed Pei's right forearm and twisted it, resulting in bruising and bleeding and hospital treatment. Pei alleges that the assault occurred when Pei threatened to call the police about Nikolaishvili. Nikolaishvili agreed to plead guilty in 2016.",
"title": "Personal life"
},
{
"paragraph_id": 99,
"text": "Pei celebrated his 100th birthday on April 26, 2017. He died at his Manhattan apartment on May 16, 2019, at the age of 102. He was survived by his three remaining adult children as well as seven grandchildren, and five great-grandchildren.",
"title": "Personal life"
},
{
"paragraph_id": 100,
"text": "In the 2021 parody film America: The Motion Picture, I. M. Pei was voiced by David Callaham.",
"title": "In popular culture"
}
]
| Ieoh Ming Pei was a Chinese-American architect. Raised in Shanghai, Pei drew inspiration at an early age from the garden villas at Suzhou, the traditional retreat of the scholar-gentry to which his family belonged. In 1935, he moved to the United States and enrolled in the University of Pennsylvania's architecture school, but he quickly transferred to the Massachusetts Institute of Technology. He was unhappy with the focus on Beaux-Arts architecture at both schools, and spent his free time researching emerging architects, especially Le Corbusier. After graduating, he joined the Harvard Graduate School of Design (GSD) and became a friend of the Bauhaus architects Walter Gropius and Marcel Breuer. In 1948, Pei was recruited by New York City real estate magnate William Zeckendorf, for whom he worked for seven years before establishing an independent design firm, I. M. Pei & Associates, in 1955. In 1966, that became I. M. Pei & Partners, and became Pei Cobb Freed & Partners in 1989. Pei retired from full-time practice in 1990. In his retirement, he worked as an architectural consultant primarily from his sons' architectural firm Pei Partnership Architects. Pei's first major recognition came with the Mesa Laboratory at the National Center for Atmospheric Research in Colorado. His new stature led to his selection as chief architect for the John F. Kennedy Library in Massachusetts. He went on to design Dallas City Hall and the East Building of the National Gallery of Art. He returned to China for the first time in 1975 to design a hotel at Fragrant Hills and, fifteen years later, designed Bank of China Tower, Hong Kong, a skyscraper in Hong Kong for the Bank of China. In the early 1980s, Pei was the focus of controversy when he designed a glass-and-steel pyramid for the Louvre in Paris. He later returned to the world of the arts by designing the Morton H. Meyerson Symphony Center in Dallas, the Miho Museum in Japan, Shigaraki, near Kyoto, and the chapel of the junior and high school: MIHO Institute of Aesthetics, the Suzhou Museum in Suzhou, Museum of Islamic Art in Qatar, and the Grand Duke Jean Museum of Modern Art, abbreviated to Mudam, in Luxembourg. Pei won a wide variety of prizes and awards in the field of architecture, including the AIA Gold Medal in 1979, the first Praemium Imperiale for Architecture in 1989, and the Lifetime Achievement Award from the Cooper-Hewitt, National Design Museum, in 2003. In 1983, he won the Pritzker Prize, which is sometimes referred to as the Nobel Prize of architecture. | 2001-11-05T06:45:29Z | 2023-12-28T04:47:52Z | [
"Template:ISBN",
"Template:Portal bar",
"Template:Authority control",
"Template:Use American English",
"Template:Family name hatnote",
"Template:Wikiquote",
"Template:Commons category",
"Template:Pritzker Prize laureates",
"Template:Main",
"Template:Convert",
"Template:Cite book",
"Template:Structurae person",
"Template:I. M. Pei",
"Template:Infobox architect",
"Template:National Medal of Arts recipients 1980s",
"Template:Use mdy dates",
"Template:Post-nominals",
"Template:Respell",
"Template:Zh",
"Template:Cite news",
"Template:Cite magazine",
"Template:Louvre",
"Template:Featured article",
"Template:Ill",
"Template:Reflist",
"Template:Webarchive",
"Template:Cite web",
"Template:Cite thesis",
"Template:Short description",
"Template:IPAc-en",
"Template:Nowrap",
"Template:Anchor"
]
| https://en.wikipedia.org/wiki/I._M._Pei |
15,156 | ICD (disambiguation) | ICD is the International Statistical Classification of Diseases and Related Health Problems, an international standard diagnostic tool.
ICD may also refer to: | [
{
"paragraph_id": 0,
"text": "ICD is the International Statistical Classification of Diseases and Related Health Problems, an international standard diagnostic tool.",
"title": ""
},
{
"paragraph_id": 1,
"text": "ICD may also refer to:",
"title": ""
}
]
| ICD is the International Statistical Classification of Diseases and Related Health Problems, an international standard diagnostic tool. ICD may also refer to: | 2002-02-25T15:43:11Z | 2023-11-04T09:21:26Z | [
"Template:TOC right",
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/ICD_(disambiguation) |
15,158 | Islamic Jihad | Islamic Jihad may refer to: | [
{
"paragraph_id": 0,
"text": "Islamic Jihad may refer to:",
"title": ""
}
]
| Islamic Jihad may refer to: Jihad, the Islamic theological concept, literally meaning "struggle"
Islamic Jihad Organization, defunct group active in Lebanon 1983-1992, precursor to Hezbollah
Islamic Jihad Union, al-Qaeda affiliate active in Afghanistan and Pakistan since 2005
Islamic Jihad of Yemen, defunct al-Qaeda affiliate active in Yemen 2008
Egyptian Islamic Jihad, al-Qaeda affiliate active in Egypt since the late 1970s
Palestinian Islamic Jihad, group active in Gaza since 1981
Turkish Islamic Jihad, group active in Turkey 1991-1996, now probably defunct | 2002-02-25T15:51:15Z | 2023-10-19T15:39:44Z | [
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/Islamic_Jihad |
15,161 | I486 | The Intel 486, officially named i486 and also known as 80486, is a microprocessor. It is a higher-performance follow-up to the Intel 386. The i486 was introduced in 1989. It represents the fourth generation of binary compatible CPUs following the 8086 of 1978, the Intel 80286 of 1982, and 1985's i386.
It was the first tightly-pipelined x86 design as well as the first x86 chip to include more than one million transistors. It offered a large on-chip cache and an integrated floating-point unit.
When it was announced, the initial performance was published as between 15 and 20 VAX MIPS, between 37,000 and 49,000 dhrystones per second, and between 6.1 and 8.2 double-precision megawhetstones per second for both the 25 and 33 MHz versions. A typical 50 MHz i486 executes around 40 million instructions per second (MIPS), reaching 50 MIPS peak performance. It is approximately twice as fast as the i386 or i286 per clock cycle. The i486's improved performance is thanks to its five-stage pipeline with all stages bound to a single cycle. The enhanced on-chip FPU was significantly faster than the i387 FPU per cycle. The Intel 80387 FPU ("i387") was a separate, optional math coprocessor that was installed in a motherboard socket alongside the i386.
The i486 was succeeded by the original Pentium.
The concept of this microprocessor generation was discussed with Pat Gelsinger and John Crawford shortly after the release of the 386 processor in 1985. The team started computer simulation in early 1987 and finalized the logic and microcode functions during 1988. The design database was finalized in February 1989, leading to tape-out on March 1; the first silicon came back from fabrication on March 20.
The i486 was announced at Spring Comdex on April 10, 1989. At the announcement, Intel stated that samples would be available in the third quarter and production quantities would ship in the fourth quarter. The first i486-based PCs were announced in late 1989.
The first major update to the i486 design came in March 1992 with the release of the clock-doubled 486DX2 series. It was the first time that the CPU core clock frequency was separated from the system bus clock frequency by using a dual clock multiplier, supporting 486DX2 chips at 40 and 50 MHz. The faster 66 MHz 486DX2-66 was released that August.
The fifth-generation Pentium processor launched in 1993, while Intel continued to produce i486 processors, including the triple-clock-rate 486DX4-100 with a 100 MHz clock speed and an L1 cache doubled to 16 KB.
Earlier, Intel had decided not to share its 80386 and 80486 technologies with AMD. However, AMD believed that their technology sharing agreement extended to the 80386 as a derivative of the 80286. AMD reverse-engineered the 386 and produced the 40 MHz Am386DX-40 chip, which was cheaper and had lower power consumption than Intel's best 33 MHz version. Intel attempted to prevent AMD from selling the processor, but AMD won in court, which allowed it to establish itself as a competitor.
AMD continued to create clones, releasing the first-generation Am486 chip in April 1993 with clock frequencies of 25, 33 and 40 MHz. Second-generation Am486DX2 chips with 50, 66 and 80 MHz clock frequencies were released the following year. The Am486 series was completed with a 120 MHz DX4 chip in 1995.
AMD's long-running 1987 arbitration lawsuit against Intel was settled in 1995, and AMD gained access to Intel's 80486 microcode. This led to the creation of two versions of AMD's 486 processor - one reverse-engineered from Intel's microcode, while the other used AMD's microcode in a clean room design process. However, the settlement also concluded that the 80486 would be AMD's last Intel clone.
Another 486 clone manufacturer was Cyrix, which was a fabless co-processor chip maker for 80286/386 systems. The first Cyrix 486 processors, the 486SLC and 486DLC, were released in 1992 and used the 80386 package. Both Texas Instruments-manufactured Cyrix processors were pin-compatible with 386SX/DX systems, which allowed them to become an upgrade option. However, these chips could not match the Intel 486 processors, having only 1 KB of cache memory and no built-in math coprocessor. In 1993, Cyrix released its own Cx486DX and DX2 processors, which were closer in performance to Intel's counterparts. Intel and Cyrix sued each other, with Intel filing for patent infringement and Cyrix for antitrust claims. In 1994, Cyrix won the patent infringement case and dropped its antitrust claim.
In 1995, both Cyrix and AMD began looking at a ready market for users wanting to upgrade their processors. Cyrix released a derivative 486 processor called the 5x86, based on the Cyrix M1 core, which was clocked up to 120 MHz and was an option for 486 Socket 3 motherboards. AMD released a 133 MHz Am5x86 upgrade chip, which was essentially an improved 80486 with double the cache and a quad multiplier that also worked with the original 486DX motherboards. Am5x86 was the first processor to use AMD's performance rating and was marketed as Am5x86-P75, with claims that it was equivalent to the Pentium 75. Kingston Technology launched a 'TurboChip' 486 system upgrade that used a 133 MHz Am5x86.
Intel responded by making a Pentium OverDrive upgrade chip for 486 motherboards, which was a modified Pentium core that ran up to 83 MHz on boards with a 25 or 33 MHz front-side bus clock. OverDrive wasn't popular due to speed and price. The 486 was declared obsolete as early as 1996, with a Florida school district's purchase of a fleet of 486DX4 machines in that year sparking controversy. New computers equipped with 486 processors in discount warehouses became scarce, and an IBM spokesperson called it a "dinosaur". Even after the Pentium series of processors gained a foothold in the market, however, Intel continued to produce 486 cores for industrial embedded applications. Intel discontinued production of i486 processors in late 2007.
The instruction set of the i486 is very similar to the i386, with the addition of a few extra instructions, such as CMPXCHG, a compare-and-swap atomic operation, and XADD, a fetch-and-add atomic operation that returned the original value (unlike a standard ADD, which returns flags only).
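As a rough illustration (not taken from Intel's documentation), the effect of these two instructions can be sketched in C. On the processor, each of the following read-modify-write sequences executes as a single instruction, and with a LOCK prefix it becomes atomic with respect to other bus agents:

    #include <stdint.h>

    /* Sketch of CMPXCHG: if the destination still holds the expected value,
       store the new one; the old value is returned so the caller can tell
       whether the swap happened (the CPU reports this via EAX and the flags). */
    static uint32_t compare_and_swap(volatile uint32_t *dest,
                                     uint32_t expected, uint32_t desired)
    {
        uint32_t old = *dest;
        if (old == expected)
            *dest = desired;
        return old;
    }

    /* Sketch of XADD: add to the destination and hand back the value it held
       before the addition, unlike a plain ADD, which only updates the flags. */
    static uint32_t fetch_and_add(volatile uint32_t *dest, uint32_t value)
    {
        uint32_t old = *dest;
        *dest += value;
        return old;
    }

Primitives like these are the usual building blocks for spinlocks and reference counters, which is part of why their addition mattered to operating-system writers.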
The i486's performance architecture is a vast improvement over the i386. It has an on-chip unified instruction and data cache, an on-chip floating-point unit (FPU) and an enhanced bus interface unit. Due to the tight pipelining, sequences of simple instructions (such as ALU reg,reg and ALU reg,im) could sustain single-clock-cycle throughput (one instruction completed every clock). In other words, it was running about 1.8 clocks per instruction. These improvements yielded a rough doubling in integer ALU performance over the i386 at the same clock rate. A 16 MHz i486 therefore had performance similar to a 33 MHz i386. The older design had to reach 50 MHz to be comparable with a 25 MHz i486 part.
Just as in the i386, a flat 4 GB memory model could be implemented. All "segment selector" registers could be set to a neutral value in protected mode, or to zero in real mode, and using only the 32-bit "offset registers" (x86-terminology for general CPU registers used as address registers) as a linear 32-bit virtual address bypassing the segmentation logic. Virtual addresses were then normally mapped onto physical addresses by the paging system except when it was disabled. (Real mode had no virtual addresses.) Just as with the i386, circumventing memory segmentation could substantially improve performance for some operating systems and applications.
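As a minimal sketch of how such a flat model is typically set up in protected mode (the layout and bit values follow the standard x86 descriptor encoding, but the names and table contents here are illustrative rather than taken from any particular operating system), an OS can load segment descriptors whose base is 0 and whose limit spans the full 4 GB, so that 32-bit offsets act directly as linear addresses:

    #include <stdint.h>

    /* One 8-byte protected-mode segment descriptor. */
    struct gdt_entry {
        uint16_t limit_low;   /* limit bits 0..15             */
        uint16_t base_low;    /* base bits 0..15              */
        uint8_t  base_mid;    /* base bits 16..23             */
        uint8_t  access;      /* present, DPL, segment type   */
        uint8_t  gran_limit;  /* flags plus limit bits 16..19 */
        uint8_t  base_high;   /* base bits 24..31             */
    } __attribute__((packed));

    /* base = 0, limit = 0xFFFFF with 4 KB granularity: each segment spans 0..4 GB. */
    static const struct gdt_entry flat_gdt[] = {
        {0x0000, 0, 0, 0x00, 0x00, 0},  /* mandatory null descriptor             */
        {0xFFFF, 0, 0, 0x9A, 0xCF, 0},  /* ring-0 code segment, execute/read     */
        {0xFFFF, 0, 0, 0x92, 0xCF, 0},  /* ring-0 data/stack segment, read/write */
    };

With CS, DS, SS, and the other selectors pointing at descriptors like these, the segmentation stage adds a base of zero, and paging (if enabled) then maps the resulting 32-bit linear addresses onto physical memory.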
On a typical PC motherboard, either four matched 30-pin (8-bit) SIMMs or one 72-pin (32-bit) SIMM per bank were required to fit the i486's 32-bit data bus. The address bus used 30 bits (A31..A2) complemented by four byte-select pins (instead of A0, A1) to allow for any 8/16/32-bit selection. This meant that the limit of directly addressable physical memory was 4 gigabytes as well (2^30 32-bit words = 2^32 8-bit words).
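A small illustrative helper (my own sketch, not part of any chipset's logic) shows how a byte address splits into the dword address driven on A31..A2 plus a byte-lane select, which is what the four byte-enable pins express in place of A0 and A1:

    #include <stdint.h>

    struct bus_cycle {
        uint32_t dword_addr;    /* address of the aligned 32-bit word (A31..A2) */
        uint8_t  byte_enables;  /* bit n set = byte lane n selected; the BE3#..BE0#
                                   pins carry the active-low version of this mask */
    };

    /* Encode a single-byte access; wider accesses would set two or four bits. */
    static struct bus_cycle encode_byte_access(uint32_t byte_addr)
    {
        struct bus_cycle c;
        c.dword_addr   = byte_addr & ~3u;         /* align down to the 32-bit word */
        c.byte_enables = 1u << (byte_addr & 3u);  /* pick one of the four byte lanes */
        return c;
    }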
Intel offered several suffixes and variants (see table). Variants include:
The maximal internal clock frequency (on Intel's versions) ranged from 16 to 100 MHz. The 16 MHz i486SX model was used by Dell Computers.
One of the few i486 models specified for a 50 MHz bus (486DX-50) initially had overheating problems and was moved to the 0.8-micrometer fabrication process. However, problems continued when the 486DX-50 was installed in local-bus systems due to the high bus speed, making it unpopular with mainstream consumers. Local-bus video was considered a requirement at the time, though the 486DX-50 remained popular with users of EISA systems. The 486DX-50 was soon eclipsed by the clock-doubled i486DX2, which although running the internal CPU logic at twice the external bus speed (50 MHz), was nevertheless slower because the external bus ran at only 25 MHz. The i486DX2 at 66 MHz (with 33 MHz external bus) was faster than the 486DX-50, overall.
More powerful i486 iterations such as the OverDrive and DX4 were less popular (the latter available as an OEM part only), as they came out after Intel had released the next-generation Pentium processor family. Certain steppings of the DX4 also officially supported 50 MHz bus operation, but it was a seldom-used feature.
*WT = write-through cache strategy, WB = write-back cache strategy
Processors compatible with the i486 were produced by companies such as IBM, Texas Instruments, AMD, Cyrix, UMC, and STMicroelectronics (formerly SGS-Thomson). Some were clones (identical at the microarchitectural level), others were clean room implementations of the Intel instruction set. (IBM's multiple-source requirement was one of the reasons behind its x86 manufacturing since the 80286.) The i486 was, however, covered by many Intel patents, including from the prior i386. Intel and IBM had broad cross-licenses of these patents, and AMD was granted rights to the relevant patents in the 1995 settlement of a lawsuit between the companies.
AMD produced several clones using a 40 MHz bus (486DX-40, 486DX/2-80, and 486DX/4-120) which had no Intel equivalent, as well as a part specified for 90 MHz, using a 30 MHz external clock, that was sold only to OEMs. The fastest running i486-compatible CPU, the Am5x86, ran at 133 MHz and was released by AMD in 1995. 150 MHz and 160 MHz parts were planned but never officially released.
Cyrix made a variety of i486-compatible processors, positioned at the cost-sensitive desktop and low-power (laptop) markets. Unlike AMD's 486 clones, the Cyrix processors were the result of clean-room reverse engineering. Cyrix's early offerings included the 486DLC and 486SLC, two hybrid chips that plugged into 386DX or SX sockets respectively, and offered 1 KB of cache (versus 8 KB for the then-current Intel/AMD parts). Cyrix also made "real" 486 processors, which plugged into the i486's socket and offered 2 or 8 KB of cache. Clock-for-clock, the Cyrix-made chips were generally slower than their Intel/AMD equivalents, though later products with 8 KB caches were more competitive, albeit late to market.
The Motorola 68040, while not i486 compatible, was often positioned as its equivalent in features and performance. On a clock-for-clock basis, the Motorola 68040 could significantly outperform the Intel chip. However, the i486 had the ability to be clocked significantly faster without overheating, and 68040 performance lagged behind later production i486 systems.
Early i486-based computers were equipped with several ISA slots (using an emulated PC/AT-bus) and sometimes one or two 8-bit-only slots (compatible with the PC/XT-bus). Many motherboards enabled overclocking of these from the default 6 or 8 MHz to perhaps 16.7 or 20 MHz (half the i486 bus clock) in several steps, often from within the BIOS setup. Especially older peripheral cards normally worked well at such speeds as they often used standard MSI chips instead of slower (at the time) custom VLSI designs. This could give significant performance gains (such as for old video cards moved from a 386 or 286 computer, for example). However, operation beyond 8 or 10 MHz could sometimes lead to stability problems, at least in systems equipped with SCSI or sound cards.
Some motherboards came equipped with a 32-bit EISA bus that was backward compatible with the ISA standard. EISA offered attractive features such as increased bandwidth, extended addressing, IRQ sharing, and card configuration through software (rather than through jumpers, DIP switches, etc.). However, EISA cards were expensive and therefore mostly employed in servers and workstations. Consumer desktops often used the simpler, faster VESA Local Bus (VLB), which was unfortunately prone to electrical and timing-based instability; typical consumer desktops had ISA slots combined with a single VLB slot for a video card. VLB was gradually replaced by PCI during the final years of the i486 period. Few Pentium-class motherboards had VLB support, as VLB was based directly on the i486 bus, which is much different from the P5 Pentium bus. ISA persisted through the P5 Pentium generation and was not completely displaced by PCI until the Pentium III era, although it persisted well into the Pentium 4 era, especially among industrial PCs.
Late i486 boards were normally equipped with both PCI and ISA slots, and sometimes a single VLB slot. In this configuration, VLB or PCI throughput suffered depending on how buses were bridged. Initially, the VLB slot in these systems was usually fully compatible only with video cards (fitting, as "VESA" stands for Video Electronics Standards Association); VLB-IDE, multi I/O, or SCSI cards could have problems on motherboards with PCI slots. The VL-Bus operated at the same clock speed as the i486-bus (basically a local bus) while the PCI bus also usually depended on the i486 clock but sometimes had a divider setting available via the BIOS. This could be set to 1/1 or 1/2, sometimes even 2/3 (which brought a 50 MHz CPU clock down to about 33 MHz). Some motherboards limited the PCI clock to the specified maximum of 33 MHz and certain network cards depended on this frequency for correct bit-rates. The ISA clock was typically generated by a divider of the CPU/VLB/PCI clock.
One of the earliest complete systems to use the i486 chip was the Apricot VX FT, produced by British hardware manufacturer Apricot Computers. Even overseas, in the United States, it was promoted as "The World's First 486".
Later i486 boards supported Plug-And-Play, a specification designed by Microsoft that began as a part of Windows 95 to make component installation easier for consumers.
The AMD Am5x86 and Cyrix Cx5x86 were the last i486 processors often used in late-generation i486 motherboards. These boards came with PCI slots and 72-pin SIMMs, were designed to run Windows 95, and were also used to upgrade older 80486 systems. While the Cyrix Cx5x86 faded when the Cyrix 6x86 took over, the AMD Am5x86 remained important given delays to the AMD K5.
Computers based on the i486 remained popular through the late 1990s, serving as low-end processors for entry-level PCs. Production for traditional desktop and laptop systems ceased in 1998, when Intel introduced the Celeron brand, though it continued to be produced for embedded systems through the late 2000s.
In the general-purpose desktop computer role, i486-based machines remained in use into the early 2000s, especially as Windows 95 through 98 and Windows NT 4.0 were the last Microsoft operating systems to officially support i486-based systems. Windows 2000 could run on an i486-based machine, although with less than optimal performance, due to the minimum hardware requirement of a Pentium processor. As they were overtaken by newer operating systems, i486 systems fell out of use except for backward compatibility with older programs (most notably games) that had problems running on newer systems. DOSBox, however, was available for later operating systems and provides emulation of the i486 instruction set, as well as full compatibility with most DOS-based programs.
The i486 was eventually overtaken by the Pentium for personal computer applications, although Intel continued production for use in embedded systems. In May 2006, Intel announced that production of the i486 would stop at the end of September 2007. | [
{
"paragraph_id": 0,
"text": "The Intel 486, officially named i486 and also known as 80486, is a microprocessor. It is a higher-performance follow-up to the Intel 386. The i486 was introduced in 1989. It represents the fourth generation of binary compatible CPUs following the 8086 of 1978, the Intel 80286 of 1982, and 1985's i386.",
"title": ""
},
{
"paragraph_id": 1,
"text": "It was the first tightly-pipelined x86 design as well as the first x86 chip to include more than one million transistors. It offered a large on-chip cache and an integrated floating-point unit.",
"title": ""
},
{
"paragraph_id": 2,
"text": "When it was announced, the initial performance was originally published between 15 and 20 VAX MIPS, between 37,000 and 49,000 dhrystones per second, and between 6.1 and 8.2 double-precision megawhetstones per second for both 25 and 33 MHz version. A typical 50 MHz i486 executes around 40 million instructions per second (MIPS), reaching 50 MIPS peak performance. It is approximately twice as fast as the i386 or i286 per clock cycle. The i486's improved performance is thanks to its five-stage pipeline with all stages bound to a single cycle. The enhanced FPU unit on the chip was significantly faster than the i387 FPU per cycle. The intel 80387 FPU (\"i387\") was a separate, optional math coprocessor that was installed in a motherboard socket alongside the i386.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The i486 was succeeded by the original Pentium.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The concept of this microprocessor generation was discussed with Pat Gelsinger and John Crawford shortly after the release of 386 processor in 1985. The team started the computer simulation in early 1987. They finalized the logic and microcode function during 1988. The team finalized the database in February 1989 until the tape out on March 1st. They received the first silicon from the fabrication on March 20th.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The i486 was announced at Spring Comdex in April 10, 1989. At the announcement, Intel stated that samples would be available in the third quarter and production quantities would ship in the fourth quarter. The first i486-based PCs were announced in late 1989.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The first major update to the i486 design came in March 1992 with the release of the clock-doubled 486DX2 series. It was the first time that the CPU core clock frequency was separated from the system bus clock frequency by using a dual clock multiplier, supporting 486DX2 chips at 40 and 50 MHz. The faster 66 MHz 486DX2-66 was released that August.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The fifth-generation Pentium processor launched in 1993, while Intel continued to produce i486 processors, including the triple-clock-rate 486DX4-100 with a 100 MHz clock speed and a L1 cache doubled to 16 KB.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Earlier, Intel had decided not to share its 80386 and 80486 technologies with AMD. However, AMD believed that their technology sharing agreement extended to the 80386 as a derivative of the 80286. AMD reverse-engineered the 386 and produced the 40 MHz Am386DX-40 chip, which was cheaper and had lower power consumption than Intel's best 33 MHz version. Intel attempted to prevent AMD from selling the processor, but AMD won in court, which allowed it to establish itself as a competitor.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "AMD continued to create clones, releasing the first-generation Am486 chip in April 1993 with clock frequencies of 25, 33 and 40 MHz. Second-generation Am486DX2 chips with 50, 66 and 80 MHz clock frequencies were released the following year. The Am486 series was completed with a 120 MHz DX4 chip in 1995.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "AMD's long-running 1987 arbitration lawsuit against Intel was settled in 1995, and AMD gained access to Intel's 80486 microcode. This led to the creation of two versions of AMD's 486 processor - one reverse-engineered from Intel's microcode, while the other used AMD's microcode in a clean room design process. However, the settlement also concluded that the 80486 would be AMD's last Intel clone.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Another 486 clone manufacturer was Cyrix, which was a fabless co-processor chip maker for 80286/386 systems. The first Cyrix 486 processors, the 486SLC and 486DLC, were released in 1992 and used the 80386 package. Both Texas Instruments-manufactured Cyrix processors were pin-compatible with 386SX/DX systems, which allowed them to become an upgrade option. However, these chips could not match the Intel 486 processors, having only 1 KB of cache memory and no built-in math coprocessor. In 1993, Cyrix released its own Cx486DX and DX2 processors, which were closer in performance to Intel's counterparts. Intel and Cyrix sued each other, with Intel filing for patent infringement and Cyrix for antitrust claims. In 1994, Cyrix won the patent infringement case and dropped its antitrust claim.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In 1995, both Cyrix and AMD began looking at a ready market for users wanting to upgrade their processors. Cyrix released a derivative 486 processor called the 5x86, based on the Cyrix M1 core, which was clocked up to 120 MHz and was an option for 486 Socket 3 motherboards. AMD released a 133 MHz Am5x86 upgrade chip, which was essentially an improved 80486 with double the cache and a quad multiplier that also worked with the original 486DX motherboards. Am5x86 was the first processor to use AMD's performance rating and was marketed as Am5x86-P75, with claims that it was equivalent to the Pentium 75. Kingston Technology launched a 'TurboChip' 486 system upgrade that used a 133 MHz Am5x86.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Intel responded by making a Pentium OverDrive upgrade chip for 486 motherboards, which was a modified Pentium core that ran up to 83 MHz on boards with a 25 or 33 MHz front-side bus clock. OverDrive wasn't popular due to speed and price. The 486 was declared obsolete as early as 1996, with a Florida school district's purchase of a fleet of 486DX4 machines in that year sparking controversy. New computers equipped with 486 processors in discount warehouses became scarce, and an IBM spokesperson called it a \"dinosaur\". Even after the Pentium series of processors gained a foothold in the market, however, Intel continued to produce 486 cores for industrial embedded applications. Intel discontinued production of i486 processors in late 2007.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The instruction set of the i486 is very similar to the i386, with the addition of a few extra instructions, such as CMPXCHG, a compare-and-swap atomic operation, and XADD, a fetch-and-add atomic operation that returned the original value (unlike a standard ADD, which returns flags only).",
"title": "Improvements"
},
{
"paragraph_id": 15,
"text": "The i486's performance architecture is a vast improvement over the i386. It has an on-chip unified instruction and data cache, an on-chip floating-point unit (FPU) and an enhanced bus interface unit. Due to the tight pipelining, sequences of simple instructions (such as ALU reg,reg and ALU reg,im) could sustain single-clock-cycle throughput (one instruction completed every clock). In other words, it was running about 1.8 clocks per instruction. These improvements yielded a rough doubling in integer ALU performance over the i386 at the same clock rate. A 16 MHz i486 therefore had performance similar to a 33 MHz i386. The older design had to reach 50 MHz to be comparable with a 25 MHz i486 part.",
"title": "Improvements"
},
{
"paragraph_id": 16,
"text": "Just as in the i386, a flat 4 GB memory model could be implemented. All \"segment selector\" registers could be set to a neutral value in protected mode, or to zero in real mode, and using only the 32-bit \"offset registers\" (x86-terminology for general CPU registers used as address registers) as a linear 32-bit virtual address bypassing the segmentation logic. Virtual addresses were then normally mapped onto physical addresses by the paging system except when it was disabled. (Real mode had no virtual addresses.) Just as with the i386, circumventing memory segmentation could substantially improve performance for some operating systems and applications.",
"title": "Improvements"
},
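As a rough illustration of the flat model described above (an assumption-laden sketch, not part of the original article), the two descriptors below are the conventional base-0, 4 GiB-limit entries an operating system might place in its GDT so that segment selectors map offsets one-to-one onto linear addresses. The array name and comments are invented for illustration.

/* Hypothetical flat-model GDT contents; values follow the standard x86
 * descriptor encoding (base 0, limit 4 GiB, 32-bit segments). */
#include <stdint.h>

static const uint64_t flat_gdt[] = {
    0x0000000000000000ULL, /* mandatory null descriptor                  */
    0x00CF9A000000FFFFULL, /* code segment: base 0, limit 4 GiB, 32-bit  */
    0x00CF92000000FFFFULL, /* data segment: base 0, limit 4 GiB, 32-bit  */
};

/* With DS, ES and SS loaded with the data selector, a 32-bit register value
 * used as an address is already the linear address, so segmentation is
 * effectively bypassed as the paragraph describes. */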
{
"paragraph_id": 17,
"text": "On a typical PC motherboard, either four matched 30-pin (8-bit) SIMMs or one 72-pin (32-bit) SIMM per bank were required to fit the i486's 32-bit data bus. The address bus used 30-bits (A31..A2) complemented by four byte-select pins (instead of A0,A1) to allow for any 8/16/32-bit selection. This meant that the limit of directly addressable physical memory was 4 gigabytes as well (2 32-bit words = 2 8-bit words).",
"title": "Improvements"
},
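A small, hypothetical C helper (not from the original article) can make this addressing scheme concrete: 2^30 word addresses on A31..A2, each selecting up to four byte lanes through the byte-select pins, together cover 2^32 bytes (4 GB). The struct and function names are invented for illustration.

/* Hypothetical sketch of the i486 bus addressing described above. */
#include <stdint.h>

struct bus_cycle {
    uint32_t word_address;  /* value presented on A31..A2 (byte address / 4) */
    uint8_t  byte_enables;  /* bit i set -> byte lane i selected             */
};

/* Decode an aligned 1-, 2- or 4-byte access at byte address 'addr'. */
static struct bus_cycle decode_access(uint32_t addr, unsigned size)
{
    struct bus_cycle c;
    c.word_address = addr >> 2;                            /* drop A1 and A0 */
    c.byte_enables = (uint8_t)(((1u << size) - 1u) << (addr & 3u));
    return c;
}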
{
"paragraph_id": 18,
"text": "Intel offered several suffixes and variants (see table). Variants include:",
"title": "Models"
},
{
"paragraph_id": 19,
"text": "The maximal internal clock frequency (on Intel's versions) ranged from 16 to 100 MHz. The 16 MHz i486SX model was used by Dell Computers.",
"title": "Models"
},
{
"paragraph_id": 20,
"text": "One of the few i486 models specified for a 50 MHz bus (486DX-50) initially had overheating problems and was moved to the 0.8-micrometer fabrication process. However, problems continued when the 486DX-50 was installed in local-bus systems due to the high bus speed, making it unpopular with mainstream consumers. Local-bus video was considered a requirement at the time, though it remained popular with users of EISA systems. The 486DX-50 was soon eclipsed by the clock-doubled i486DX2, which although running the internal CPU logic at twice the external bus speed (50 MHz), was nevertheless slower because the external bus ran at only 25 MHz. The i486DX2 at 66 MHz (with 33 MHz external bus) was faster than the 486DX-50, overall.",
"title": "Models"
},
{
"paragraph_id": 21,
"text": "More powerful i486 iterations such as the OverDrive and DX4 were less popular (the latter available as an OEM part only), as they came out after Intel had released the next-generation Pentium processor family. Certain steppings of the DX4 also officially supported 50 MHz bus operation, but it was a seldom-used feature.",
"title": "Models"
},
{
"paragraph_id": 22,
"text": "*WT = write-through cache strategy, WB = write-back cache strategy",
"title": "Models"
},
{
"paragraph_id": 23,
"text": "Processors compatible with the i486 were produced by companies such as IBM, Texas Instruments, AMD, Cyrix, UMC, and STMicroelectronics (formerly SGS-Thomson). Some were clones (identical at the microarchitectural level), others were clean room implementations of the Intel instruction set. (IBM's multiple-source requirement was one of the reasons behind its x86 manufacturing since the 80286.) The i486 was, however, covered by many Intel patents, including from the prior i386. Intel and IBM had broad cross-licenses of these patents, and AMD was granted rights to the relevant patents in the 1995 settlement of a lawsuit between the companies.",
"title": "Other makers of 486-like CPUs"
},
{
"paragraph_id": 24,
"text": "AMD produced several clones using a 40 MHz bus (486DX-40, 486DX/2-80, and 486DX/4-120) which had no Intel equivalent, as well as a part specified for 90 MHz, using a 30 MHz external clock, that was sold only to OEMs. The fastest running i486-compatible CPU, the Am5x86, ran at 133 MHz and was released by AMD in 1995. 150 MHz and 160 MHz parts were planned but never officially released.",
"title": "Other makers of 486-like CPUs"
},
{
"paragraph_id": 25,
"text": "Cyrix made a variety of i486-compatible processors, positioned at the cost-sensitive desktop and low-power (laptop) markets. Unlike AMD's 486 clones, the Cyrix processors were the result of clean-room reverse engineering. Cyrix's early offerings included the 486DLC and 486SLC, two hybrid chips that plugged into 386DX or SX sockets respectively, and offered 1 KB of cache (versus 8 KB for the then-current Intel/AMD parts). Cyrix also made \"real\" 486 processors, which plugged into the i486's socket and offered 2 or 8 KB of cache. Clock-for-clock, the Cyrix-made chips were generally slower than their Intel/AMD equivalents, though later products with 8 KB caches were more competitive, albeit late to market.",
"title": "Other makers of 486-like CPUs"
},
{
"paragraph_id": 26,
"text": "The Motorola 68040, while not i486 compatible, was often positioned as its equivalent in features and performance. Clock-for-clock basis the Motorola 68040 could significantly outperform the Intel chip. However, the i486 had the ability to be clocked significantly faster without overheating. Motorola 68040 performance lagged behind the later production i486 systems.",
"title": "Other makers of 486-like CPUs"
},
{
"paragraph_id": 27,
"text": "Early i486-based computers were equipped with several ISA slots (using an emulated PC/AT-bus) and sometimes one or two 8-bit-only slots (compatible with the PC/XT-bus). Many motherboards enabled overclocking of these from the default 6 or 8 MHz to perhaps 16.7 or 20 MHz (half the i486 bus clock) in several steps, often from within the BIOS setup. Especially older peripheral cards normally worked well at such speeds as they often used standard MSI chips instead of slower (at the time) custom VLSI designs. This could give significant performance gains (such as for old video cards moved from a 386 or 286 computer, for example). However, operation beyond 8 or 10 MHz could sometimes lead to stability problems, at least in systems equipped with SCSI or sound cards.",
"title": "Motherboards and buses"
},
{
"paragraph_id": 28,
"text": "Some motherboards came equipped with a 32-bit EISA bus that was backward compatible with the ISA-standard. EISA offered attractive features such as increased bandwidth, extended addressing, IRQ sharing, and card configuration through software (rather than through jumpers, DIP switches, etc.) However, EISA cards were expensive and therefore mostly employed in servers and workstations. Consumer desktops often used the simpler, faster VESA Local Bus (VLB). Unfortunately prone to electrical and timing-based instability; typical consumer desktops had ISA slots combined with a single VLB slot for a video card. VLB was gradually replaced by PCI during the final years of the i486 period. Few Pentium class motherboards had VLB support as VLB was based directly on the i486 bus; much different from the P5 Pentium-bus. ISA persisted through the P5 Pentium generation and was not completely displaced by PCI until the Pentium III era, although ISA persisted well into the Pentium 4 era, especially among industrial PCs.",
"title": "Motherboards and buses"
},
{
"paragraph_id": 29,
"text": "Late i486 boards were normally equipped with both PCI and ISA slots, and sometimes a single VLB slot. In this configuration, VLB or PCI throughput suffered depending on how buses were bridged. Initially, the VLB slot in these systems was usually fully compatible only with video cards (fitting as \"VESA\" stands for Video Electronics Standards Association); VLB-IDE, multi I/O, or SCSI cards could have problems on motherboards with PCI slots. The VL-Bus operated at the same clock speed as the i486-bus (basically a local bus) while the PCI bus also usually depended on the i486 clock but sometimes had a divider setting available via the BIOS. This could be set to 1/1 or 1/2, sometimes even 2/3 (for 50 MHz CPU clocks). Some motherboards limited the PCI clock to the specified maximum of 33 MHz and certain network cards depended on this frequency for correct bit-rates. The ISA clock was typically generated by a divider of the CPU/VLB/PCI clock.",
"title": "Motherboards and buses"
},
{
"paragraph_id": 30,
"text": "One of the earliest complete systems to use the i486 chip was the Apricot VX FT, produced by British hardware manufacturer Apricot Computers. Even overseas in the United States it was popularized as \"The World's First 486\".",
"title": "Motherboards and buses"
},
{
"paragraph_id": 31,
"text": "Later i486 boards supported Plug-And-Play, a specification designed by Microsoft that began as a part of Windows 95 to make component installation easier for consumers.",
"title": "Motherboards and buses"
},
{
"paragraph_id": 32,
"text": "The AMD Am5x86 and Cyrix Cx5x86 were the last i486 processors often used in late-generation i486 motherboards. They came with PCI slots and 72-pin SIMMs that were designed to run Windows 95, and also used for 80486 motherboards upgrades. While the Cyrix Cx5x86 faded when the Cyrix 6x86 took over, the AMD Am5x86 remained important given AMD K5 delays.",
"title": "Obsolescence"
},
{
"paragraph_id": 33,
"text": "Computers based on the i486 remained popular through the late 1990s, serving as low-end processors for entry-level PCs. Production for traditional desktop and laptop systems ceased in 1998, when Intel introduced the Celeron brand, though it continued to be produced for embedded systems through the late 2000s.",
"title": "Obsolescence"
},
{
"paragraph_id": 34,
"text": "In the general-purpose desktop computer role, i486-based machines remained in use into the early 2000s, especially as Windows 95 through 98 and Windows NT 4.0 were the last Microsoft operating systems to officially support i486-based systems. Windows 2000 could run on a i486-based machine, although with a less than optimal performance, due to the minimum hardware requirement of a Pentium processor. However, as they were overtaken by newer operating systems, i486 systems fell out of use except for backward compatibility with older programs (most notably games), especially given problems running on newer operating systems. However, DOSBox was available for later operating systems and provides emulation of the i486 instruction set, as well as full compatibility with most DOS-based programs.",
"title": "Obsolescence"
},
{
"paragraph_id": 35,
"text": "The i486 was eventually overtaken by the Pentium for personal computer applications, although Intel continued production for use in embedded systems. In May 2006, Intel announced that production of the i486 would stop at the end of September 2007.",
"title": "Obsolescence"
}
]
| The Intel 486, officially named i486 and also known as 80486, is a microprocessor. It is a higher-performance follow-up to the Intel 386. The i486 was introduced in 1989. It represents the fourth generation of binary-compatible CPUs, following the 8086 of 1978, the Intel 80286 of 1982, and 1985's i386. It was the first tightly pipelined x86 design as well as the first x86 chip to include more than one million transistors. It offered a large on-chip cache and an integrated floating-point unit. When it was announced, initial performance figures were published as between 15 and 20 VAX MIPS, between 37,000 and 49,000 dhrystones per second, and between 6.1 and 8.2 double-precision megawhetstones per second for both the 25 and 33 MHz versions. A typical 50 MHz i486 executes around 40 million instructions per second (MIPS), reaching 50 MIPS peak performance. It is approximately twice as fast as the i386 or i286 per clock cycle. The i486's improved performance is due to its five-stage pipeline with all stages bound to a single cycle. The enhanced on-chip FPU was significantly faster than the i387 FPU per cycle. The Intel 80387 FPU ("i387") was a separate, optional math coprocessor that was installed in a motherboard socket alongside the i386. The i486 was succeeded by the original Pentium. | 2001-10-15T16:35:28Z | 2023-09-28T09:29:04Z | [
"Template:Authority control",
"Template:Anchor",
"Template:Unsourced",
"Template:Notelist",
"Template:Cite news",
"Template:Citation needed",
"Template:Intel processors",
"Template:Cite magazine",
"Template:Webarchive",
"Template:Lowercase title",
"Template:Infobox CPU",
"Template:Efn",
"Template:Reflist",
"Template:Use mdy dates",
"Template:Short description",
"Template:Cite web"
]
| https://en.wikipedia.org/wiki/I486 |
15,164 | I486SX | The i486SX was a microprocessor originally released by Intel in 1991. It was a modified Intel i486DX microprocessor with its floating-point unit (FPU) disabled. It was intended as a lower-cost CPU for use in low-end systems, adapting the SX suffix of the earlier i386SX in order to connote a lower-cost option. However, unlike the i386SX, which had a 16-bit external data bus and a 24-bit external address bus (compared to the fully 32-bit i386, its higher-cost counterpart), the i486SX was entirely 32-bit.
In the early 1990s, common applications, such as word processors and database applications, did not need or benefit from a floating-point unit, such as that included in the i486, introduced in 1989. Among the rare exceptions were CAD applications, which could often simulate floating-point operations in software but benefited immensely from a hardware floating-point unit. AMD had begun manufacturing its i386DX clone, the Am386, which was faster than Intel's. To respond to this new situation, Intel wanted to provide a lower-cost i486 CPU for system integrators, but without sacrificing the better profit margins of a full i486. Intel was able to accomplish this with the i486SX, the first revisions of which were practically identical to the i486 but with the floating-point unit internally wired to be disabled. The i486SX was introduced in mid-1991 at 20 MHz in a pin grid array (PGA) package. Later versions of the i486SX, from 1992 onward, had the FPU entirely removed for cost-cutting reasons and came in surface-mount packages as well.
The first computer system to ship with an i486SX on its motherboard from the factory was Advanced Logic Research's Business VEISA 486/20SX in April 1991. Initial reviews of the i486SX chip were generally poor among technology publications and the buying public, who deemed it an example of crippleware.
Many systems allowed the user to upgrade the i486SX to a CPU with the FPU enabled. The upgrade was shipped as the i487, which was a full-blown i486DX chip with an extra pin. The extra pin prevents the chip from being installed incorrectly. Although i486SX devices were not used at all when the i487 was installed, they were hard to remove because the i486SX was typically installed in non-ZIF sockets or in a plastic package that was surface mounted on the motherboard. Later OverDrive processors also plugged into the socket and offered performance enhancements as well. | [
{
"paragraph_id": 0,
"text": "The i486SX was a microprocessor originally released by Intel in 1991. It was a modified Intel i486DX microprocessor with its floating-point unit (FPU) disabled. It was intended as a lower-cost CPU for use in low-end systems, adapting the SX suffix of the earlier i386SX in order to connote a lower-cost option. However, unlike the i386SX, which had a 16-bit external data bus and a 24-bit external address bus (compared to the fully 32-bit i386, its higher-cost counterpoint), the i486SX was entirely 32-bit.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In the early 1990s, common applications, such as word processors and database applications, did not need or benefit from a floating-point unit, such as that included in the i486, introduced in 1989. Among the rare exceptions were CAD applications, which could often simulate floating point operations in software, but benefited from a hardware floating point unit immensely. AMD had begun manufacturing its i386DX clone, the Am386, which was faster than Intel's. To respond to this new situation, Intel wanted to provide a lower cost i486 CPU for system integrators, but without sacrificing the better profit margins of a full i486. Intel were able to accomplish this with the i486SX, the first revisions of which were practically identical to the i486 but with its floating-point unit internally wired to be disabled. The i486SX was introduced in mid-1991 at 20 MHz in a pin grid array (PGA) package. Later versions of the i486SX, from 1992 onward, had the FPU entirely removed for cost-cutting reasons and comes in surface-mount packages as well.",
"title": "Overview"
},
{
"paragraph_id": 2,
"text": "The first computer system to ship with an i486SX on its motherboard from the factory was Advanced Logic Research's Business VEISA 486/20SX in April 1991. Initial reviews of the i486SX chip were generally poor among technology publications and the buying public, who deemed it an example of crippleware.",
"title": "Overview"
},
{
"paragraph_id": 3,
"text": "Many systems allowed the user to upgrade the i486SX to a CPU with the FPU enabled. The upgrade was shipped as the i487, which was a full-blown i486DX chip with an extra pin. The extra pin prevents the chip from being installed incorrectly. Although i486SX devices were not used at all when the i487 was installed, they were hard to remove because the i486SX was typically installed in non-ZIF sockets or in a plastic package that was surface mounted on the motherboard. Later OverDrive processors also plugged into the socket and offered performance enhancements as well.",
"title": "Overview"
}
]
| The i486SX was a microprocessor originally released by Intel in 1991. It was a modified Intel i486DX microprocessor with its floating-point unit (FPU) disabled. It was intended as a lower-cost CPU for use in low-end systems, adapting the SX suffix of the earlier i386SX in order to connote a lower-cost option. However, unlike the i386SX, which had a 16-bit external data bus and a 24-bit external address bus, the i486SX was entirely 32-bit. | 2001-10-15T19:12:50Z | 2023-10-07T14:24:44Z | [
"Template:Efn",
"Template:Cite journal",
"Template:Lowercase title",
"Template:Multiple image",
"Template:Rp",
"Template:Notelist",
"Template:Reflist",
"Template:Cite web",
"Template:Intel processors"
]
| https://en.wikipedia.org/wiki/I486SX |
15,165 | Ivory | Ivory is a hard, white material from the tusks (traditionally from elephants) and teeth of animals, that consists mainly of dentine, one of the physical structures of teeth and tusks. The chemical structure of the teeth and tusks of mammals is the same, regardless of the species of origin, but ivory contains structures of mineralised collagen. The trade in certain teeth and tusks other than elephant is well established and widespread; therefore, "ivory" can correctly be used to describe any mammalian teeth or tusks of commercial interest which are large enough to be carved or scrimshawed.
Besides natural ivory, ivory can also be produced synthetically, hence (unlike natural ivory) not requiring the retrieval of the material from animals. Tagua nuts can also be carved like ivory.
The trade of finished goods of ivory products has its origins in the Indus Valley. Ivory is a main product that is seen in abundance and was used for trading in Harappan civilization. Finished ivory products that were seen in Harappan sites include kohl sticks, pins, awls, hooks, toggles, combs, game pieces, dice, inlay and other personal ornaments.
Ivory has been valued since ancient times in art or manufacturing for making a range of items from ivory carvings to false teeth, piano keys, fans, and dominoes. Elephant ivory is the most important source, but ivory from mammoth, walrus, hippopotamus, sperm whale, orca, narwhal and warthog are used as well. Elk also have two ivory teeth, which are believed to be the remnants of tusks from their ancestors.
The national and international trade in natural ivory of threatened species such as African and Asian elephants is illegal. The word ivory ultimately derives from the ancient Egyptian âb, âbu ('elephant'), through the Latin ebor- or ebur.
Both the Greek and Roman civilizations practiced ivory carving to make large quantities of high value works of art, precious religious objects, and decorative boxes for costly objects. Ivory was often used to form the white of the eyes of statues.
There is some evidence of either whale or walrus ivory used by the ancient Irish. Solinus, a Roman writer in the 3rd century claimed that the Celtic peoples in Ireland would decorate their sword-hilts with the 'teeth of beasts that swim in the sea'. Adomnan of Iona wrote a story about St Columba giving a sword decorated with carved ivory as a gift that a penitent would bring to his master so he could redeem himself from slavery.
The Syrian and North African elephant populations were reduced to extinction, probably due to the demand for ivory in the Classical world.
The Chinese have long valued ivory for both art and utilitarian objects. Early reference to the Chinese export of ivory is recorded after the Chinese explorer Zhang Qian ventured to the west to form alliances to enable the eventual free movement of Chinese goods to the west; as early as the first century BC, ivory was moved along the Northern Silk Road for consumption by western nations. Southeast Asian kingdoms included tusks of the Indian elephant in their annual tribute caravans to China. Chinese craftsmen carved ivory to make everything from images of deities to the pipe stems and end pieces of opium pipes.
In Japan, ivory carvings became popular in the 17th century during the Edo period, and many netsuke and kiseru, on which animals and legendary creatures were carved, and inro, on which ivory was inlaid, were made. From the mid-1800s, the new Meiji government's policy of promoting and exporting arts and crafts led to the frequent display of elaborate ivory crafts at World's fair. Among them, the best works were admired because they were purchased by Western museums, wealthy people, and the Japanese Imperial Family.
The Buddhist cultures of Southeast Asia, including Myanmar, Thailand, Laos and Cambodia, traditionally harvested ivory from their domesticated elephants. Ivory was prized for containers due to its ability to keep an airtight seal. It was also commonly carved into elaborate seals utilized by officials to "sign" documents and decrees by stamping them with their unique official seal.
In Southeast Asian countries, where Muslim Malay peoples live, such as Malaysia, Indonesia and the Philippines, ivory was the material of choice for making the handles of kris daggers. In the Philippines, ivory was also used to craft the faces and hands of Catholic icons and images of saints prevalent in the Santero culture.
Tooth and tusk ivory can be carved into a vast variety of shapes and objects. Examples of modern carved ivory objects are okimono, netsukes, jewelry, flatware handles, furniture inlays, and piano keys. Additionally, warthog tusks, and teeth from sperm whales, orcas and hippos can also be scrimshawed or superficially carved, thus retaining their morphologically recognizable shapes.
As trade with Africa expanded during the first part of the 1800s, ivory became readily available. Up to 90 percent of the ivory imported into the United States was processed, at one time, in Connecticut where Deep River and Ivoryton in 1860s became the centers of ivory milling, in particular, due to the demand for ivory piano keys.
Ivory usage in the last thirty years has moved towards mass production of souvenirs and jewelry. In Japan, the increase in wealth sparked consumption of solid ivory hanko – name seals – which before this time had been made of wood. These hanko can be carved out in a matter of seconds using machinery and were partly responsible for massive African elephant decline in the 1980s, when the African elephant population went from 1.3 million to around 600,000 in ten years.
Before plastics were introduced, ivory had many ornamental and practical uses, mainly because of the white color it presents when processed. It was formerly used to make cutlery handles, billiard balls, piano keys, Scottish bagpipes, buttons and a wide range of ornamental items.
Synthetic substitutes for ivory in the use of most of these items have been developed since 1800: the billiard industry challenged inventors to come up with an alternative material that could be manufactured; the piano industry abandoned ivory as a key covering material in the 1970s.
Ivory can be taken from dead animals – however, most ivory came from elephants that were killed for their tusks. For example, in 1930 to acquire 40 tons of ivory required the killing of approximately 700 elephants. Other animals which are now endangered were also preyed upon, for example, hippos, which have very hard white ivory prized for making artificial teeth. In the first half of the 20th century, Kenyan elephant herds were devastated because of demand for ivory, to be used for piano keys.
During the Art Deco era from 1912 to 1940, dozens (if not hundreds) of European artists used ivory in the production of chryselephantine statues. Two of the most frequent users of ivory in their sculptured artworks were Ferdinand Preiss and Claire Colinet.
While many uses of ivory are purely ornamental in nature, it often must be carved and manipulated into different shapes to achieve the desired form. Other applications, such as ivory piano keys, introduce repeated wear and surface handling of the material. It is therefore essential to consider the mechanical properties of ivory when designing alternatives.
Elephant tusks are the animal's incisors, so the composition of ivory is unsurprisingly similar to that of teeth in several other mammals. It is composed of dentine, a biomineral composite constructed from collagen fibers mineralized with hydroxyapatite. This composite lends ivory the impressive mechanical properties—high stiffness, strength, hardness, and toughness—required for its use in the animal's day-to-day activities. Ivory has a measured hardness of 35 on the Vickers scale, exceeding that of bone. It also has a flexural modulus of 14 GPa, a flexural strength of 378 MPa, and a fracture toughness of 2.05 MPa·m^1/2. These measured values indicate that ivory mechanically outperforms most of its common alternatives, including celluloid plastic and polyethylene terephthalate.
Ivory's mechanical properties result from the microstructure of the dentine tissue. It is thought that the structural arrangement of mineralized collagen fibers could contribute to the checkerboard-like Schreger pattern observed in polished ivory samples. This is often used as an attribute in ivory identification. As well as being an optical feature, the Schreger pattern could point towards a micropattern well-designed to prevent crack propagation by dispersing stresses. Additionally, this intricate microstructure lends a strong anisotropy to ivory's mechanical characteristics. Separate hardness measurements on three orthogonal tusk directions indicated that circumferential planes of tusk had up to 25% greater hardness than radial planes of the same specimen. During hardness testing, inelastic and elastic recovery was observed on circumferential planes while the radial planes displayed plastic deformation. This implies that ivory has directional viscoelasticity. These anisotropic properties can be explained by the reinforcement of collagen fibers in the composite oriented along the circumference.
Owing to the rapid decline in the populations of the animals that produce it, the importation and sale of ivory in many countries is banned or severely restricted. In the ten years preceding a decision in 1989 by CITES to ban international trade in African elephant ivory, the population of African elephants declined from 1.3 million to around 600,000. It was found by investigators from the Environmental Investigation Agency (EIA) that CITES sales of stockpiles from Singapore and Burundi (270 tonnes and 89.5 tonnes respectively) had created a system that increased the value of ivory on the international market, thus rewarding international smugglers and giving them the ability to control the trade and continue smuggling new ivory.
Since the ivory ban, some Southern African countries have claimed their elephant populations are stable or increasing, and argued that ivory sales would support their conservation efforts. Other African countries oppose this position, stating that renewed ivory trading puts their own elephant populations under greater threat from poachers reacting to demand. CITES allowed the sale of 49 tonnes of ivory from Zimbabwe, Namibia and Botswana in 1997 to Japan.
In 2007, under pressure from the International Fund for Animal Welfare, eBay banned all international sales of elephant-ivory products. The decision came after several mass slaughters of African elephants, most notably the 2006 Zakouma elephant slaughter in Chad. The IFAW found that up to 90% of the elephant-ivory transactions on eBay violated their own wildlife policies and could potentially be illegal. In October 2008, eBay expanded the ban, disallowing any sales of ivory on eBay.
A more recent sale in 2008 of 108 tonnes from the three countries and South Africa took place to Japan and China. The inclusion of China as an "approved" importing country created enormous controversy, despite being supported by CITES, the World Wide Fund for Nature and Traffic. They argued that China had controls in place and the sale might depress prices. However, the price of ivory in China has skyrocketed. Some believe this may be due to deliberate price fixing by those who bought the stockpile, echoing the warnings from the Japan Wildlife Conservation Society on price-fixing after sales to Japan in 1997, and monopoly given to traders who bought stockpiles from Burundi and Singapore in the 1980s.
A 2019 peer-reviewed study reported that the rate of African elephant poaching was in decline, with the annual poaching mortality rate peaking at over 10% in 2011 and falling to below 4% by 2017. The study found that the "annual poaching rates in 53 sites strongly correlate with proxies of ivory demand in the main Chinese markets, whereas between-country and between-site variation is strongly associated with indicators of corruption and poverty." Based on these findings, the study authors recommended action to both reduce demand for ivory in China and other main markets and to decrease corruption and poverty in Africa.
In 2006, nineteen African countries signed the "Accra Declaration" calling for a total ivory trade ban, and twenty range states attended a meeting in Kenya calling for a 20-year moratorium in 2007.
Methods of obtaining ivory can be divided into:
The use and trade of elephant ivory have become controversial because they have contributed to seriously declining elephant populations in many countries. It is estimated that consumption in Great Britain alone in 1831 amounted to the deaths of nearly 4,000 elephants. In 1975, the Asian elephant was placed on Appendix I of the Convention on International Trade in Endangered Species (CITES), which prevents international trade between member states of species that are threatened by trade. The African elephant was placed on Appendix I in January 1990. Since then, some southern African countries have had their populations of elephants "downlisted" to Appendix II, allowing the domestic trade of non-ivory items; there have also been two "one off" sales of ivory stockpiles.
In June 2015, more than a ton of confiscated ivory was crushed in New York City's Times Square by the Wildlife Conservation Society to send a message that the illegal trade will not be tolerated. The ivory, confiscated in New York and Philadelphia, was sent up a conveyor belt into a rock crusher. The Wildlife Conservation Society has pointed out that the global ivory trade leads to the slaughter of up to 35,000 elephants a year in Africa. In June 2018, Conservative MEPs' Deputy Leader Jacqueline Foster MEP urged the EU to follow the UK's lead and introduce a tougher ivory ban across Europe.
China was the biggest market for poached ivory but announced they would phase out the legal domestic manufacture and sale of ivory products in May 2015. In September of the same year, China and the U.S. announced they would "enact a nearly complete ban on the import and export of ivory." The Chinese market has a high degree of influence on the elephant population.
Trade in the ivory from the tusks of dead woolly mammoths frozen in the tundra has occurred for 300 years and continues to be legal. Mammoth ivory is used today to make handcrafted knives and similar implements. Mammoth ivory is rare and costly because mammoths have been extinct for millennia, and scientists are hesitant to sell museum-worthy specimens in pieces. Some estimates suggest that 10 million mammoths are still buried in Siberia.
Fossil walrus ivory from animals that died before 1972 is legal to buy and sell or possess in the United States, unlike many other types of ivory.
The ancestors of elk had protruding teeth, also known as elk ivory, similar to the tusks of other animals; these were used as protection against predators. The tusks were also used to assert dominance during the mating season, as elk antlers were then much smaller than they are now. As evolution produced larger antlers, the tusks fell out of use and were reduced to little more than ordinary teeth.
These teeth have the same chemical composition as the ivory found in heavily used and poached elephant tusks, making them another potential source of ivory, as the teeth may possibly be removed without harming the elk themselves.
Among American Indian tribes, elk teeth have major significance in jewelry. Both women and men wore them, in bracelets, earrings, and chokers, and they carried a deeper meaning for both within the tribes. For women, they were believed to bring good luck and good health; for men, they marked the wearer as a good hunter.
Ivory can also be produced synthetically.
A species of hard nut is gaining popularity as a replacement for ivory, although its size limits its usability. It is sometimes called vegetable ivory, or tagua, and is the seed endosperm of the ivory nut palm commonly found in coastal rainforests of Ecuador, Peru and Colombia. | [
{
"paragraph_id": 0,
"text": "Ivory is a hard, white material from the tusks (traditionally from elephants) and teeth of animals, that consists mainly of dentine, one of the physical structures of teeth and tusks. The chemical structure of the teeth and tusks of mammals is the same, regardless of the species of origin, but ivory contains structures of mineralised collagen. The trade in certain teeth and tusks other than elephant is well established and widespread; therefore, \"ivory\" can correctly be used to describe any mammalian teeth or tusks of commercial interest which are large enough to be carved or scrimshawed.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Besides natural ivory, ivory can also be produced synthetically, hence (unlike natural ivory) not requiring the retrieval of the material from animals. Tagua nuts can also be carved like ivory.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The trade of finished goods of ivory products has its origins in the Indus Valley. Ivory is a main product that is seen in abundance and was used for trading in Harappan civilization. Finished ivory products that were seen in Harappan sites include kohl sticks, pins, awls, hooks, toggles, combs, game pieces, dice, inlay and other personal ornaments.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Ivory has been valued since ancient times in art or manufacturing for making a range of items from ivory carvings to false teeth, piano keys, fans, and dominoes. Elephant ivory is the most important source, but ivory from mammoth, walrus, hippopotamus, sperm whale, orca, narwhal and warthog are used as well. Elk also have two ivory teeth, which are believed to be the remnants of tusks from their ancestors.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The national and international trade in natural ivory of threatened species such as African and Asian elephants is illegal. The word ivory ultimately derives from the ancient Egyptian âb, âbu ('elephant'), through the Latin ebor- or ebur.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Both the Greek and Roman civilizations practiced ivory carving to make large quantities of high value works of art, precious religious objects, and decorative boxes for costly objects. Ivory was often used to form the white of the eyes of statues.",
"title": "Uses"
},
{
"paragraph_id": 6,
"text": "There is some evidence of either whale or walrus ivory used by the ancient Irish. Solinus, a Roman writer in the 3rd century claimed that the Celtic peoples in Ireland would decorate their sword-hilts with the 'teeth of beasts that swim in the sea'. Adomnan of Iona wrote a story about St Columba giving a sword decorated with carved ivory as a gift that a penitent would bring to his master so he could redeem himself from slavery.",
"title": "Uses"
},
{
"paragraph_id": 7,
"text": "The Syrian and North African elephant populations were reduced to extinction, probably due to the demand for ivory in the Classical world.",
"title": "Uses"
},
{
"paragraph_id": 8,
"text": "The Chinese have long valued ivory for both art and utilitarian objects. Early reference to the Chinese export of ivory is recorded after the Chinese explorer Zhang Qian ventured to the west to form alliances to enable the eventual free movement of Chinese goods to the west; as early as the first century BC, ivory was moved along the Northern Silk Road for consumption by western nations. Southeast Asian kingdoms included tusks of the Indian elephant in their annual tribute caravans to China. Chinese craftsmen carved ivory to make everything from images of deities to the pipe stems and end pieces of opium pipes.",
"title": "Uses"
},
{
"paragraph_id": 9,
"text": "In Japan, ivory carvings became popular in the 17th century during the Edo period, and many netsuke and kiseru, on which animals and legendary creatures were carved, and inro, on which ivory was inlaid, were made. From the mid-1800s, the new Meiji government's policy of promoting and exporting arts and crafts led to the frequent display of elaborate ivory crafts at World's fair. Among them, the best works were admired because they were purchased by Western museums, wealthy people, and the Japanese Imperial Family.",
"title": "Uses"
},
{
"paragraph_id": 10,
"text": "The Buddhist cultures of Southeast Asia, including Myanmar, Thailand, Laos and Cambodia, traditionally harvested ivory from their domesticated elephants. Ivory was prized for containers due to its ability to keep an airtight seal. It was also commonly carved into elaborate seals utilized by officials to \"sign\" documents and decrees by stamping them with their unique official seal.",
"title": "Uses"
},
{
"paragraph_id": 11,
"text": "In Southeast Asian countries, where Muslim Malay peoples live, such as Malaysia, Indonesia and the Philippines, ivory was the material of choice for making the handles of kris daggers. In the Philippines, ivory was also used to craft the faces and hands of Catholic icons and images of saints prevalent in the Santero culture.",
"title": "Uses"
},
{
"paragraph_id": 12,
"text": "Tooth and tusk ivory can be carved into a vast variety of shapes and objects. Examples of modern carved ivory objects are okimono, netsukes, jewelry, flatware handles, furniture inlays, and piano keys. Additionally, warthog tusks, and teeth from sperm whales, orcas and hippos can also be scrimshawed or superficially carved, thus retaining their morphologically recognizable shapes.",
"title": "Uses"
},
{
"paragraph_id": 13,
"text": "As trade with Africa expanded during the first part of the 1800s, ivory became readily available. Up to 90 percent of the ivory imported into the United States was processed, at one time, in Connecticut where Deep River and Ivoryton in 1860s became the centers of ivory milling, in particular, due to the demand for ivory piano keys.",
"title": "Uses"
},
{
"paragraph_id": 14,
"text": "Ivory usage in the last thirty years has moved towards mass production of souvenirs and jewelry. In Japan, the increase in wealth sparked consumption of solid ivory hanko – name seals – which before this time had been made of wood. These hanko can be carved out in a matter of seconds using machinery and were partly responsible for massive African elephant decline in the 1980s, when the African elephant population went from 1.3 million to around 600,000 in ten years.",
"title": "Uses"
},
{
"paragraph_id": 15,
"text": "Before plastics were introduced, ivory had many ornamental and practical uses, mainly because of the white color it presents when processed. It was formerly used to make cutlery handles, billiard balls, piano keys, Scottish bagpipes, buttons and a wide range of ornamental items.",
"title": "Consumption before plastics"
},
{
"paragraph_id": 16,
"text": "Synthetic substitutes for ivory in the use of most of these items have been developed since 1800: the billiard industry challenged inventors to come up with an alternative material that could be manufactured; the piano industry abandoned ivory as a key covering material in the 1970s.",
"title": "Consumption before plastics"
},
{
"paragraph_id": 17,
"text": "Ivory can be taken from dead animals – however, most ivory came from elephants that were killed for their tusks. For example, in 1930 to acquire 40 tons of ivory required the killing of approximately 700 elephants. Other animals which are now endangered were also preyed upon, for example, hippos, which have very hard white ivory prized for making artificial teeth. In the first half of the 20th century, Kenyan elephant herds were devastated because of demand for ivory, to be used for piano keys.",
"title": "Consumption before plastics"
},
{
"paragraph_id": 18,
"text": "During the Art Deco era from 1912 to 1940, dozens (if not hundreds) of European artists used ivory in the production of chryselephantine statues. Two of the most frequent users of ivory in their sculptured artworks were Ferdinand Preiss and Claire Colinet.",
"title": "Consumption before plastics"
},
{
"paragraph_id": 19,
"text": "While many uses of ivory are purely ornamental in nature, it often must be carved and manipulated into different shapes to achieve the desired form. Other applications, such as ivory piano keys, introduce repeated wear and surface handling of the material. It is therefore essential to consider the mechanical properties of ivory when designing alternatives.",
"title": "Mechanical characteristics"
},
{
"paragraph_id": 20,
"text": "Elephant tusks are the animal's incisors, so the composition of ivory is unsurprisingly similar to that of teeth in several other mammals. It is composed of dentine, a biomineral composite constructed from collagen fibers mineralized with hydroxyapatite. This composite lends ivory the impressive mechanical properties—high stiffness, strength, hardness, and toughness—required for its use in the animal's day-to-day activities. Ivory has a measured hardness of 35 on the Vickers scale, exceeding that of bone. It also has a flexural modulus of 14 GPa, a flexural strength of 378 MPa a fracture toughness of 2.05 MPam. These measured values indicate that ivory mechanically outperforms most of its most common alternatives, including celluloid plastic and polyethylene terephthalate.",
"title": "Mechanical characteristics"
},
{
"paragraph_id": 21,
"text": "Ivory's mechanical properties result from the microstructure of the dentine tissue. It is thought that the structural arrangement of mineralized collagen fibers could contribute to the checkerboard-like Schreger pattern observed in polished ivory samples. This is often used as an attribute in ivory identification. As well as being an optical feature, the Schreger pattern could point towards a micropattern well-designed to prevent crack propagation by dispersing stresses. Additionally, this intricate microstructure lends a strong anisotropy to ivory's mechanical characteristics. Separate hardness measurements on three orthogonal tusk directions indicated that circumferential planes of tusk had up to 25% greater hardness than radial planes of the same specimen. During hardness testing, inelastic and elastic recovery was observed on circumferential planes while the radial planes displayed plastic deformation. This implies that ivory has directional viscoelasticity. These anisotropic properties can be explained by the reinforcement of collagen fibers in the composite oriented along the circumference.",
"title": "Mechanical characteristics"
},
{
"paragraph_id": 22,
"text": "Owing to the rapid decline in the populations of the animals that produce it, the importation and sale of ivory in many countries is banned or severely restricted. In the ten years preceding a decision in 1989 by CITES to ban international trade in African elephant ivory, the population of African elephants declined from 1.3 million to around 600,000. It was found by investigators from the Environmental Investigation Agency (EIA) that CITES sales of stockpiles from Singapore and Burundi (270 tonnes and 89.5 tonnes respectively) had created a system that increased the value of ivory on the international market, thus rewarding international smugglers and giving them the ability to control the trade and continue smuggling new ivory.",
"title": "Availability"
},
{
"paragraph_id": 23,
"text": "Since the ivory ban, some Southern African countries have claimed their elephant populations are stable or increasing, and argued that ivory sales would support their conservation efforts. Other African countries oppose this position, stating that renewed ivory trading puts their own elephant populations under greater threat from poachers reacting to demand. CITES allowed the sale of 49 tonnes of ivory from Zimbabwe, Namibia and Botswana in 1997 to Japan.",
"title": "Availability"
},
{
"paragraph_id": 24,
"text": "In 2007, under pressure from the International Fund for Animal Welfare, eBay banned all international sales of elephant-ivory products. The decision came after several mass slaughters of African elephants, most notably the 2006 Zakouma elephant slaughter in Chad. The IFAW found that up to 90% of the elephant-ivory transactions on eBay violated their own wildlife policies and could potentially be illegal. In October 2008, eBay expanded the ban, disallowing any sales of ivory on eBay.",
"title": "Availability"
},
{
"paragraph_id": 25,
"text": "A more recent sale in 2008 of 108 tonnes from the three countries and South Africa took place to Japan and China. The inclusion of China as an \"approved\" importing country created enormous controversy, despite being supported by CITES, the World Wide Fund for Nature and Traffic. They argued that China had controls in place and the sale might depress prices. However, the price of ivory in China has skyrocketed. Some believe this may be due to deliberate price fixing by those who bought the stockpile, echoing the warnings from the Japan Wildlife Conservation Society on price-fixing after sales to Japan in 1997, and monopoly given to traders who bought stockpiles from Burundi and Singapore in the 1980s.",
"title": "Availability"
},
{
"paragraph_id": 26,
"text": "A 2019 peer-reviewed study reported that the rate of African elephant poaching was in decline, with the annual poaching mortality rate peaking at over 10% in 2011 and falling to below 4% by 2017. The study found that the \"annual poaching rates in 53 sites strongly correlate with proxies of ivory demand in the main Chinese markets, whereas between-country and between-site variation is strongly associated with indicators of corruption and poverty.\" Based on these findings, the study authors recommended action to both reduce demand for ivory in China and other main markets and to decrease corruption and poverty in Africa.",
"title": "Availability"
},
{
"paragraph_id": 27,
"text": "In 2006, nineteen African countries signed the \"Accra Declaration\" calling for a total ivory trade ban, and twenty range states attended a meeting in Kenya calling for a 20-year moratorium in 2007.",
"title": "Availability"
},
{
"paragraph_id": 28,
"text": "Methods of obtaining ivory can be divided into:",
"title": "Availability"
},
{
"paragraph_id": 29,
"text": "The use and trade of elephant ivory have become controversial because they have contributed to seriously declining elephant populations in many countries. It is estimated that consumption in Great Britain alone in 1831 amounted to the deaths of nearly 4,000 elephants. In 1975, the Asian elephant was placed on Appendix I of the Convention on International Trade in Endangered Species (CITES), which prevents international trade between member states of species that are threatened by trade. The African elephant was placed on Appendix I in January 1990. Since then, some southern African countries have had their populations of elephants \"downlisted\" to Appendix II, allowing the domestic trade of non-ivory items; there have also been two \"one off\" sales of ivory stockpiles.",
"title": "Availability"
},
{
"paragraph_id": 30,
"text": "In June 2015, more than a ton of confiscated ivory was crushed in New York City's Times Square by the Wildlife Conservation Society to send a message that the illegal trade will not be tolerated. The ivory, confiscated in New York and Philadelphia, was sent up a conveyor belt into a rock crusher. The Wildlife Conservation Society has pointed out that the global ivory trade leads to the slaughter of up to 35,000 elephants a year in Africa. In June 2018, Conservative MEPs' Deputy Leader Jacqueline Foster MEP urged the EU to follow the UK's lead and introduce a tougher ivory ban across Europe.",
"title": "Availability"
},
{
"paragraph_id": 31,
"text": "China was the biggest market for poached ivory but announced they would phase out the legal domestic manufacture and sale of ivory products in May 2015. In September of the same year, China and the U.S. announced they would \"enact a nearly complete ban on the import and export of ivory.\" The Chinese market has a high degree of influence on the elephant population.",
"title": "Availability"
},
{
"paragraph_id": 32,
"text": "Trade in the ivory from the tusks of dead woolly mammoths frozen in the tundra has occurred for 300 years and continues to be legal. Mammoth ivory is used today to make handcrafted knives and similar implements. Mammoth ivory is rare and costly because mammoths have been extinct for millennia, and scientists are hesitant to sell museum-worthy specimens in pieces. Some estimates suggest that 10 million mammoths are still buried in Siberia.",
"title": "Availability"
},
{
"paragraph_id": 33,
"text": "Fossil walrus ivory from animals that died before 1972 is legal to buy and sell or possess in the United States, unlike many other types of ivory.",
"title": "Availability"
},
{
"paragraph_id": 34,
"text": "The ancestors of elk had teeth, also known as elk ivory, that protruded outwards, similar to animals that have tusks, they were used as a protective measure against predators. Alongside the use of protective measures, the tusks were used during the mating season to be used for dominance, as their antlers were smaller back then compared to now. Evolution made the antlers bigger and the use of their tusks diminished as antlers grew, making them nothing more than teeth in their mouths.",
"title": "Availability"
},
{
"paragraph_id": 35,
"text": "These teeth have the same chemical compound as the ivory found in the highly used and poached elephant tusks, making it another good alternative when it comes to taking ivory as the teeth can be possibly removed without harming the elk themselves.",
"title": "Availability"
},
{
"paragraph_id": 36,
"text": "Among Indian tribes, elk teeth has major significance when it comes to jewelry. Among women, men wore them as well as. Either through bracelets, earrings, and chokers, there was deeper meaning for both men and women within the tribes. For the women, it was believed that it would bring in good luck and good health. As for the men, it was seen that they were a good hunter.",
"title": "Availability"
},
{
"paragraph_id": 37,
"text": "Ivory can also be produced synthetically.",
"title": "Availability"
},
{
"paragraph_id": 38,
"text": "A species of hard nut is gaining popularity as a replacement for ivory, although its size limits its usability. It is sometimes called vegetable ivory, or tagua, and is the seed endosperm of the ivory nut palm commonly found in coastal rainforests of Ecuador, Peru and Colombia.",
"title": "Availability"
}
]
| Ivory is a hard, white material from the tusks and teeth of animals, that consists mainly of dentine, one of the physical structures of teeth and tusks. The chemical structure of the teeth and tusks of mammals is the same, regardless of the species of origin, but ivory contains structures of mineralised collagen. The trade in certain teeth and tusks other than elephant is well established and widespread; therefore, "ivory" can correctly be used to describe any mammalian teeth or tusks of commercial interest which are large enough to be carved or scrimshawed. Besides natural ivory, ivory can also be produced synthetically, hence not requiring the retrieval of the material from animals. Tagua nuts can also be carved like ivory. The trade of finished goods of ivory products has its origins in the Indus Valley. Ivory is a main product that is seen in abundance and was used for trading in Harappan civilization. Finished ivory products that were seen in Harappan sites include kohl sticks, pins, awls, hooks, toggles, combs, game pieces, dice, inlay and other personal ornaments. Ivory has been valued since ancient times in art or manufacturing for making a range of items from ivory carvings to false teeth, piano keys, fans, and dominoes. Elephant ivory is the most important source, but ivory from mammoth, walrus, hippopotamus, sperm whale, orca, narwhal and warthog are used as well. Elk also have two ivory teeth, which are believed to be the remnants of tusks from their ancestors. The national and international trade in natural ivory of threatened species such as African and Asian elephants is illegal. The word ivory ultimately derives from the ancient Egyptian âb, âbu ('elephant'), through the Latin ebor- or ebur. | 2001-10-16T20:02:07Z | 2023-12-17T16:10:17Z | [
"Template:Short description",
"Template:Main",
"Template:Cite news",
"Template:ISBN",
"Template:Cite EB1911",
"Template:Rp",
"Template:Circa",
"Template:Cite journal",
"Template:Cite web",
"Template:Shamos 1999",
"Template:Citation",
"Template:Authority control",
"Template:Other uses",
"Template:Lang",
"Template:Reflist",
"Template:Elephants",
"Template:Jewellery",
"Template:Cite book",
"Template:Commons category"
]
| https://en.wikipedia.org/wiki/Ivory |
15,166 | Infantry fighting vehicle | An infantry fighting vehicle (IFV), also known as a mechanized infantry combat vehicle (MICV), is a type of armoured fighting vehicle used to carry infantry into battle and provide direct-fire support. The 1990 Treaty on Conventional Armed Forces in Europe defines an infantry fighting vehicle as "an armoured combat vehicle which is designed and equipped primarily to transport a combat infantry squad, and which is armed with an integral or organic cannon of at least 20 millimeters calibre and sometimes an antitank missile launcher". IFVs often serve both as the principal weapons system and as the mode of transport for a mechanized infantry unit.
Infantry fighting vehicles are distinct from armored personnel carriers (APCs), which are transport vehicles armed only for self-defense and not specifically engineered to fight on their own. IFVs are designed to be more mobile than tanks and are equipped with a rapid-firing autocannon or a large conventional gun; they may include side ports for infantrymen to fire their personal weapons while on board.
The IFV rapidly gained popularity with armies worldwide due to a demand for vehicles with higher firepower than APCs that were less expensive and easier to maintain than tanks. Nevertheless, it did not supersede the APC concept altogether, due to the latter's continued usefulness in specialized roles. Some armies continue to maintain fleets of both IFVs and APCs.
The infantry fighting vehicle (IFV) concept evolved directly out of that of the armored personnel carrier (APC). During the Cold War of 1947-1991 armies increasingly fitted heavier and heavier weapons systems on an APC chassis to deliver suppressive fire for infantry debussing from the vehicle's troop compartment. With the growing mechanization of infantry units worldwide, some armies also came to believe that the embarked personnel should fire their weapons from inside the protection of the APC and only fight on foot as a last resort. These two trends led to the IFV, with firing ports in the troop compartment and a crew-manned weapons system. The IFV established a new niche between those combat vehicles which functioned primarily as armored weapons-carriers or as APCs.
During the 1950s, the Soviet, US, and most European armies had adopted tracked APCs. In 1958, however, the Federal Republic of Germany's newly organized Bundeswehr adopted the Schützenpanzer Lang HS.30 (also known simply as the SPz 12-3), which resembled a conventional tracked APC but carried a turret-mounted 20 mm autocannon that enabled it to engage other armored vehicles. The SPz 12-3 was the first purpose-built IFV.
The Bundeswehr's doctrine called for mounted infantry to fight and maneuver alongside tank formations rather than simply being ferried to the edge of the battlefield before dismounting. Each SPz 12-3 could carry five troops in addition to a three-man crew. Despite this, the design lacked firing ports, forcing the embarked infantry to expose themselves through open hatches to return fire.
As the SPz 12-3 was being inducted into service, the French and Austrian armies adopted new APCs which possessed firing ports, allowing embarked infantry to observe and fire their weapons from inside the vehicle. These were known as the AMX-VCI and Saurer 4K, respectively. Austria subsequently introduced an IFV variant of the Saurer 4K which carried a 20 mm autocannon, making it the first vehicle of this class to possess both firing ports and a turreted weapons-system.
In the early to mid-1960s, the Swedish Army adopted two IFVs armed with 20 mm autocannon turrets and roof firing hatches: Pansarbandvagn 301 and Pansarbandvagn 302, having already experimented with the IFV concept during WWII with the Terrängbil m/42 KP, a wheeled, machine gun-armed proto-IFV. Following the trend towards converting preexisting APCs into IFVs, the Dutch, US, and Belgian armies experimented with a variety of modified M113s during the late 1960s; these were collectively identified as the AIFV (Armored Infantry Fighting Vehicle).
The first US M113-based IFV appeared in 1969; known as the XM765, it had a sharply angled hull, ten vision blocks, and a cupola-mounted 20 mm autocannon. The XM765 design, though rejected for service, later became the basis for the very similar Dutch YPR-765. The YPR-765 had five firing ports and a 25 mm autocannon with a co-axial machine gun.
The Soviet Army fielded its first tracked APC, the BTR-50, in 1957. Its first wheeled APC, the BTR-152, had been designed as early as the late 1940s. Early versions of both these lightly armored vehicles were open-topped and carried only general-purpose machine guns for armament. As Soviet strategists became more preoccupied with the possibility of a war involving weapons of mass destruction, they became convinced of the need to deliver mounted troops to a battlefield without exposing them to the radioactive fallout from an atomic weapon.
The IFV concept was received favorably because it would enable a Soviet infantry squad to fight from inside their vehicles when operating in contaminated environments. Soviet design work on a new tracked IFV began in the late 1950s and the first prototype appeared as the Obyekt 765 in 1961. After evaluating and rejecting a number of other wheeled and tracked prototypes, the Soviet Army accepted the Obyekt 765 for service. It entered serial production as the BMP-1 in 1966.
In addition to being amphibious and superior in cross-country mobility to its predecessors, the BMP-1 carried a 73mm smoothbore cannon, a co-axial PKT machine gun, and a launcher for 9M14 Malyutka anti-tank missiles. Its hull had sufficiently heavy armor to resist .50 caliber armor-piercing ammunition along its frontal arc. Eight firing ports and vision blocks allowed the embarked infantry squad to observe and engage targets with rifles or machine guns.
The BMP-1 was heavily armed and armored, combining the qualities of a light tank with those of the traditional APC. Its use of a relatively large caliber main gun marked a departure from the Western trend of fitting IFVs with automatic cannon, which were more suitable for engaging low-flying aircraft, light armor, and dismounted personnel.
The Soviet Union produced about 20,000 BMP-1s from 1966 to 1983, at which time it was considered the most widespread IFV in the world. In Soviet service, the BMP-1 was ultimately superseded by the more sophisticated BMP-2 (in service from 1980) and by the BMP-3 (in service from 1987). A similar vehicle known as the BMD-1 was designed to accompany Soviet airborne infantry and for a number of years was the world's only airborne IFV.
In 1971 the Bundeswehr adopted the Marder, which became increasingly heavily armored through its successive marks and – like the BMP – was later fitted as standard with a launcher for anti-tank guided missiles. Between 1973 and 1975 the French and Yugoslav armies developed the AMX-10P and BVP M-80, respectively – the first amphibious IFVs to appear outside the Soviet Union. The Marder, AMX-10P, and M-80 were all armed with similar 20 mm autocannon and carried seven to eight passengers. They could also be armed with various anti-tank missile configurations.
Wheeled IFVs did not begin appearing until 1976, when the Ratel was introduced in response to a South African Army specification for a wheeled combat vehicle suited to the demands of rapid offensives combining maximum firepower and strategic mobility. Unlike European IFVs, the Ratel was not designed to allow mounted infantrymen to fight in concert with tanks but rather to operate independently across vast distances. South African officials chose a very simple, economical design because it helped reduce the significant logistical commitment necessary to keep heavier combat vehicles operational in undeveloped areas.
Excessive track wear was also an issue in the region's abrasive, sandy terrain, making the Ratel's wheeled configuration more attractive. The Ratel was typically armed with a 20 mm autocannon featuring what was then a unique twin-linked ammunition feed, allowing its gunner to rapidly switch between armor-piercing or high-explosive ammunition. Other variants were also fitted with mortars, a bank of anti-tank guided missiles, or a 90 mm cannon. Most notably, the Ratel was the first mine-protected IFV; it had a blastproof hull and was built to withstand the explosive force of anti-tank mines favored by local insurgents.
Like the BMP-1, the Ratel proved to be a major watershed in IFV development, albeit for different reasons: until its debut wheeled IFV designs were evaluated unfavorably, since they lacked the weight-carrying capacity and off-road mobility of tracked vehicles, and their wheels were more vulnerable to hostile fire. However, improvements during the 1970s in power trains, suspension technology, and tires had increased their potential strategic mobility. Reduced production, operation, and maintenance costs also helped make wheeled IFVs attractive to several nations.
During the late 1960s and early 1970s, the United States Army had gradually abandoned its attempts to utilize the M113 as an IFV and refocused on creating a dedicated IFV design able to match the BMP. Although considered reliable, the M113 chassis did not meet the necessary requirements for protection or stealth. The US also considered the M113 too heavy and slow to serve as an IFV capable of keeping pace with tanks.
Its MICV-65 program produced a number of unique prototypes, none of which were accepted for service owing to concerns about speed, armor protection, and weight. US Army evaluation staff were sent to Europe to review the AMX-10P and the Marder, both of which were rejected due to high cost, insufficient armor, or lackluster amphibious capabilities.
In 1973, the FMC Corporation developed and tested the XM723, a 21-ton tracked chassis that could accommodate three crew members and eight passengers. It initially carried a single 20 mm autocannon in a one-man turret, but in 1976 a two-man turret was introduced; this carried a 25 mm autocannon such as the M242 or Oerlikon KBA, a co-axial machine gun, and a TOW anti-tank missile launcher.
The XM723 possessed amphibious capability, nine firing ports, and spaced laminate armor on its hull. It was accepted for service with the US Army in 1980 as the Bradley Fighting Vehicle. Successive variants have been retrofitted with improved missile systems, gas particulate filter systems, Kevlar spall liners, and increased stowage. The amount of space taken up by the hull and stowage modifications has reduced the number of passengers to six.
By 1982, 30,000 IFVs had entered service worldwide, and the IFV concept appeared in the doctrines of 30 national armies. The popularity of the IFV was increased by the growing trend on the part of many nations to mechanize armies previously dominated by light infantry. However, contrary to expectation, the IFV did not render APCs obsolete. The US, Russian, French, and German armies have all retained large fleets of IFVs and APCs, finding the APC more suitable for multi-purpose or auxiliary roles.
The British Army was one of the few Western armies which had neither recognized a niche for IFVs nor adopted a dedicated IFV design by the late 1970s. In 1980, it made the decision to adopt a new tracked armored vehicle, the FV510 Warrior. British doctrine is that a vehicle should carry troops under protection to the objective and then give firepower support when they have disembarked. While normally classified as an IFV, the Warrior fills the role of an APC in British service and infantrymen do not remain embarked during combat.
The role of the IFV is closely linked to mechanized infantry doctrine. While some IFVs are armed with a direct fire gun or anti-tank guided missiles for close infantry support, they are not intended to assault armored and mechanized forces with any type of infantry on their own, mounted or not. Rather, the IFV's role is to give an infantry unit battlefield, tactical, and operational mobility during combined arms operations.
Most IFVs complement tanks as part of an armored battalion, brigade, or division; others perform traditional infantry missions supported by tanks. Early development of IFVs in a number of Western nations was promoted primarily by armor officers who wanted to integrate tanks with supporting infantry in armored divisions. There were a few exceptions to the rule: for example, the Bundeswehr's decision to adopt the SPz 12-3 was largely due to the experiences of Wehrmacht panzergrenadiers who had been inappropriately ordered to undertake combat operations better suited for armor. Hence, the Bundeswehr concluded that infantry should only fight while mounted in their own armored vehicles, ideally supported by tanks. This doctrinal trend was later subsumed into the armies of other Western nations, including the US, leading to the widespread conclusion that IFVs should be confined largely to assisting the forward momentum of tanks.
The Soviet Army granted more flexibility in this regard to its IFV doctrine, allowing for the mechanized infantry to occupy terrain that compromised an enemy defense, carry out flanking movements, or lure armor into ill-advised counterattacks. While they still performed an auxiliary role to tanks, the notion of using IFVs in these types of engagements dictated that they be heavily armed, which was reflected in the BMP-1 and its successors. Additionally, Soviet airborne doctrine made use of the BMD series of IFVs to operate in concert with paratroops rather than traditional mechanized or armored formations.
IFVs assumed a new significance after the Yom Kippur War. In addition to heralding the combat debut of the BMP-1, that conflict demonstrated the newfound significance of anti-tank guided missiles and the obsolescence of independent armored attacks. More emphasis was placed on combined arms offensives, and the importance of mechanized infantry to support tanks reemerged.
As a result of the Yom Kippur War, the Soviet Union attached more infantry to its armored formations and the US accelerated its long-delayed IFV development program. An IFV capable of accompanying tanks for the purpose of suppressing anti-tank weapons and the hostile infantry which operated them was seen as necessary to avoid the devastation wreaked on purely armored Israeli formations.
The US Army defines all vehicles classed as IFVs as having three essential characteristics: they are armed with at least a medium-caliber cannon or automatic grenade launcher, are protected at least against small-arms fire, and possess off-road mobility. It also identifies all IFVs as having some characteristics of an APC and a light tank.
The United Nations Register of Conventional Arms (UNROCA) simply defines an IFV as any armored vehicle "designed to fight with soldiers on board" and "to accompany tanks". UNROCA makes a clear distinction between IFVs and APCs, as the former's primary mission is combat rather than general transport.
All IFVs possess armored hulls protected against rifle and machine gun fire, and some are equipped with active protection systems. Most have lighter armor than main battle tanks to ensure mobility. Armies have generally accepted the risk of reduced protection to capitalize on an IFV's mobility, weight, and speed. Their fully enclosed hulls offer protection from artillery fragments and residual environmental contaminants, and they limit the mounted infantry's exposure during extended movements over open ground.
Many IFVs also have sharply angled hulls that offer a relatively high degree of protection for their armor thickness. The BMP, Boragh, BVP M-80, and their respective variants all possess steel hulls with a distribution of armor and steep angling that protect them during frontal advances. The BMP-1 was vulnerable to heavy machine guns at close range on its flanks or rear, leading to a variety of more heavily armored marks appearing from 1979 onward.
The Bradley possessed a lightweight aluminum alloy hull, which in most successive marks has been bolstered by the addition of explosive reactive and slat armor, spaced laminate belts, and steel track skirts. Throughout its life cycle, an IFV is expected to gain 30% more weight from armor additions.
As asymmetric conflicts become more common, an increasing concern with regards to IFV protection has been adequate countermeasures against land mines and improvised explosive devices. During the Iraq War, inadequate mine protection in US Bradleys forced their crews to resort to makeshift strategies such as lining the hull floors with sandbags. A few IFVs, such as the Ratel, have been specifically engineered to resist mine explosions.
IFVs may be equipped with turrets carrying autocannons of various calibers, low- or medium-velocity tank guns, anti-tank guided missiles, or automatic grenade launchers.
With a few exceptions, such as the BMP-1 and the BMP-3, designs such as the Marder and the BMP-2 have set the trend of arming IFVs with an autocannon suitable for use against lightly armored vehicles, low-flying aircraft, and dismounted infantry. This reflected the growing inclination to view IFVs as auxiliaries of armored formations: a small or medium caliber autocannon was perceived as an ideal suppressive weapon to complement large caliber tank fire. IFVs armed with miniature tank guns did not prove popular because many of the roles they were expected to perform were better performed by accompanying tanks.
The BMP-1, which was the first IFV to carry a relatively large cannon, came under criticism during the Yom Kippur War for its mediocre individual accuracy, due in part to the low velocities of its projectiles. During the Soviet–Afghan War, BMP-1 crews also complained that their armament lacked the elevation necessary to engage insurgents in mountainous terrain. The effectiveness of large caliber, low-velocity guns like the 2A28 Grom on the BMP-1 and BMD-1 was also much reduced by the appearance of Chobham armor on Western tanks.
The Ratel, which included a variant armed with a 90mm low-velocity gun, was utilized in South African combat operations against Angolan and Cuban armored formations during the South African Border War, with mixed results. Although the Ratels succeeded in destroying a large number of Angolan tanks and APCs, they were hampered by many of the same problems as the BMP-1: mediocre standoff ranges, inferior fire control, and a lack of stabilized main gun. The Ratels' heavy armament also tempted South African commanders to utilize them as light tanks rather than in their intended role of infantry support.
Another design feature of the BMP-1 did prove more successful in establishing a precedent for future IFVs: its inclusion of an anti-tank missile system. This consisted of a rail-launcher firing 9M14 Malyutka missiles which had to be reloaded manually from outside the BMP's turret. Crew members had to expose themselves to enemy fire to reload the missiles, and they could not guide them effectively from inside the confines of the turret space.
The BMP-2 and later variants of the BMP-1 made use of semiautomatic guided missile systems. In 1978, the Bundeswehr became the first Western army to embrace this trend when it retrofitted all its Marders with launchers for MILAN anti-tank missiles.
The US Army added a launcher for TOW anti-tank missiles to its fleet of Bradleys, despite the fact that this greatly reduced the interior space available for seating the embarked infantry. This was justified on the basis that the Bradley needed to not only engage and destroy other IFVs, but support tanks in the destruction of other tanks during combined arms operations.
IFVs are designed to have the strategic and tactical mobility necessary to keep pace with tanks during rapid maneuvers. Some, like the BMD series, have airborne and amphibious capabilities. IFVs may be either wheeled or tracked; tracked IFVs are usually more heavily armored and possess greater carrying capacity. Wheeled IFVs are cheaper and simpler to produce, maintain, and operate. From a logistical perspective, they are also ideal for an army without widespread access to transporters or a developed rail network to deploy its armor. | [
{
"paragraph_id": 0,
"text": "An infantry fighting vehicle (IFV), also known as a mechanized infantry combat vehicle (MICV), is a type of armoured fighting vehicle used to carry infantry into battle and provide direct-fire support. The 1990 Treaty on Conventional Armed Forces in Europe defines an infantry fighting vehicle as \"an armoured combat vehicle which is designed and equipped primarily to transport a combat infantry squad, and which is armed with an integral or organic cannon of at least 20 millimeters calibre and sometimes an antitank missile launcher\". IFVs often serve both as the principal weapons system and as the mode of transport for a mechanized infantry unit.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Infantry fighting vehicles are distinct from armored personnel carriers (APCs), which are transport vehicles armed only for self-defense and not specifically engineered to fight on their own. IFVs are designed to be more mobile than tanks and are equipped with a rapid-firing autocannon or a large conventional gun; they may include side ports for infantrymen to fire their personal weapons while on board.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The IFV rapidly gained popularity with armies worldwide due to a demand for vehicles with higher firepower than APCs that were less expensive and easier to maintain than tanks. Nevertheless, it did not supersede the APC concept altogether, due to the latter's continued usefulness in specialized roles. Some armies continue to maintain fleets of both IFVs and APCs.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The infantry fighting vehicle (IFV) concept evolved directly out of that of the armored personnel carrier (APC). During the Cold War of 1947-1991 armies increasingly fitted heavier and heavier weapons systems on an APC chassis to deliver suppressive fire for infantry debussing from the vehicle's troop compartment. With the growing mechanization of infantry units worldwide, some armies also came to believe that the embarked personnel should fire their weapons from inside the protection of the APC and only fight on foot as a last resort. These two trends led to the IFV, with firing ports in the troop compartment and a crew-manned weapons system. The IFV established a new niche between those combat vehicles which functioned primarily as armored weapons-carriers or as APCs.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "During the 1950s, the Soviet, US, and most European armies had adopted tracked APCs. In 1958, however, the Federal Republic of Germany's newly organized Bundeswehr adopted the Schützenpanzer Lang HS.30 (also known simply as the SPz 12-3), which resembled a conventional tracked APC but carried a turret-mounted 20 mm autocannon that enabled it to engage other armored vehicles. The SPz 12-3 was the first purpose-built IFV.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The Bundeswehr's doctrine called for mounted infantry to fight and maneuver alongside tank formations rather than simply being ferried to the edge of the battlefield before dismounting. Each SPz 12-3 could carry five troops in addition to a three-man crew. Despite this, the design lacked firing ports, forcing the embarked infantry to expose themselves through open hatches to return fire.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "As the SPz 12-3 was being inducted into service, the French and Austrian armies adopted new APCs which possessed firing ports, allowing embarked infantry to observe and fire their weapons from inside the vehicle. These were known as the AMX-VCI and Saurer 4K, respectively. Austria subsequently introduced an IFV variant of the Saurer 4K which carried a 20 mm autocannon, making it the first vehicle of this class to possess both firing ports and a turreted weapons-system.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In the early to mid-1960s, the Swedish Army adopted two IFVs armed with 20 mm autocannon turrets and roof firing hatches: Pansarbandvagn 301 and Pansarbandvagn 302, having experimented with the IFV concept already during WWII in the Terrängbil m/42 KP wheeled machine gun armed proto-IFV. Following the trend towards converting preexisting APCs into IFVs, the Dutch, US, and Belgian armies experimented with a variety of modified M113s during the late 1960s; these were collectively identified as the AIFV (Armored Infantry Fighting Vehicle).",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The first US M113-based IFV appeared in 1969; known as the XM765, it had a sharply angled hull, ten vision blocks, and a cupola-mounted 20 mm autocannon. The XM765 design, though rejected for service, later became the basis for the very similar Dutch YPR-765. The YPR-765 had five firing ports and a 25 mm autocannon with a co-axial machine gun.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The Soviet Army fielded its first tracked APC, the BTR-50, in 1957. Its first wheeled APC, the BTR-152, had been designed as early as the late 1940s. Early versions of both these lightly armored vehicles were open-topped and carried only general-purpose machine guns for armament. As Soviet strategists became more preoccupied with the possibility of a war involving weapons of mass destruction, they became convinced of the need to deliver mounted troops to a battlefield without exposing them to the radioactive fallout from an atomic weapon.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The IFV concept was received favorably because it would enable a Soviet infantry squad to fight from inside their vehicles when operating in contaminated environments. Soviet design work on a new tracked IFV began in the late 1950s and the first prototype appeared as the Obyekt 765 in 1961. After evaluating and rejecting a number of other wheeled and tracked prototypes, the Soviet Army accepted the Obyekt 765 for service. It entered serial production as the BMP-1 in 1966.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In addition to being amphibious and superior in cross-country mobility to its predecessors, the BMP-1 carried a 73mm smoothbore cannon, a co-axial PKT machine gun, and a launcher for 9M14 Malyutka anti-tank missiles. Its hull had sufficiently heavy armor to resist .50 caliber armor-piercing ammunition along its frontal arc. Eight firing ports and vision blocks allowed the embarked infantry squad to observe and engage targets with rifles or machine guns.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The BMP-1 was heavily armed and armored, combining the qualities of a light tank with those of the traditional APC. Its use of a relatively large caliber main gun marked a departure from the Western trend of fitting IFVs with automatic cannon, which were more suitable for engaging low-flying aircraft, light armor, and dismounted personnel.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "The Soviet Union produced about 20,000 BMP-1s from 1966 to 1983, at which time it was considered the most widespread IFVs in the world. In Soviet service, the BMP-1 was ultimately superseded by the more sophisticated BMP-2 (in service from 1980) and by the BMP-3 (in service from 1987). A similar vehicle known as the BMD-1 was designed to accompany Soviet airborne infantry and for a number of years was the world's only airborne IFV.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In 1971 the Bundeswehr adopted the Marder, which became increasingly heavily armored through its successive marks and – like the BMP – was later fitted as standard with a launcher for anti-tank guided missiles. Between 1973 and 1975 the French and Yugoslav armies developed the AMX-10P and BVP M-80, respectively – the first amphibious IFVs to appear outside the Soviet Union. The Marder, AMX-10P, and M-80 were all armed with similar 20 mm autocannon and carried seven to eight passengers. They could also be armed with various anti-tank missile configurations.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Wheeled IFVs did not begin appearing until 1976, when the Ratel was introduced in response to a South African Army specification for a wheeled combat vehicle suited to the demands of rapid offensives combining maximum firepower and strategic mobility. Unlike European IFVs, the Ratel was not designed to allow mounted infantrymen to fight in concert with tanks but rather to operate independently across vast distances. South African officials chose a very simple, economical design because it helped reduce the significant logistical commitment necessary to keep heavier combat vehicles operational in undeveloped areas.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Excessive track wear was also an issue in the region's abrasive, sandy terrain, making the Ratel's wheeled configuration more attractive. The Ratel was typically armed with a 20 mm autocannon featuring what was then a unique twin-linked ammunition feed, allowing its gunner to rapidly switch between armor-piercing or high-explosive ammunition. Other variants were also fitted with mortars, a bank of anti-tank guided missiles, or a 90 mm cannon. Most notably, the Ratel was the first mine-protected IFV; it had a blastproof hull and was built to withstand the explosive force of anti-tank mines favored by local insurgents.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Like the BMP-1, the Ratel proved to be a major watershed in IFV development, albeit for different reasons: until its debut wheeled IFV designs were evaluated unfavorably, since they lacked the weight-carrying capacity and off-road mobility of tracked vehicles, and their wheels were more vulnerable to hostile fire. However, improvements during the 1970s in power trains, suspension technology, and tires had increased their potential strategic mobility. Reduced production, operation, and maintenance costs also helped make wheeled IFVs attractive to several nations.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "During the late 1960s and early 1970s, the United States Army had gradually abandoned its attempts to utilize the M113 as an IFV and refocused on creating a dedicated IFV design able to match the BMP. Although considered reliable, the M113 chassis did not meet the necessary requirements for protection or stealth. The US also considered the M113 too heavy and slow to serve as an IFV capable of keeping pace with tanks.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Its MICV-65 program produced a number of unique prototypes, none of which were accepted for service owing to concerns about speed, armor protection, and weight. US Army evaluation staff were sent to Europe to review the AMX-10P and the Marder, both of which were rejected due to high cost, insufficient armor, or lackluster amphibious capabilities.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "In 1973, the FMC Corporation developed and tested the XM723, which was a 21-ton tracked chassis which could accommodate three crew members and eight passengers. It initially carried a single 20 mm autocannon in a one-man turret but in 1976 a two-man turret was introduced; this carried a 25 mm autocannon like M242 or Oerlikon KBA, a co-axial machine gun, and a TOW anti-tank missile launcher.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "The XM723 possessed amphibious capability, nine firing ports, and spaced laminate armor on its hull. It was accepted for service with the US Army in 1980 as the Bradley Fighting Vehicle. Successive variants have been retrofitted with improved missile systems, gas particulate filter systems, Kevlar spall liners, and increased stowage. The amount of space taken up by the hull and stowage modifications has reduced the number of passengers to six.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "By 1982 30,000 IFVs had entered service worldwide, and the IFV concept appeared in the doctrines of 30 national armies. The popularity of the IFV was increased by the growing trend on the part of many nations to mechanize armies previously dominated by light infantry. However, contrary to expectation the IFV did not render APCs obsolete. The US, Russian, French, and German armies have all retained large fleets of IFVs and APCs, finding the APC more suitable for multi-purpose or auxiliary roles.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "The British Army was one of the few Western armies which had neither recognized a niche for IFVs nor adopted a dedicated IFV design by the late 1970s. In 1980, it made the decision to adopt a new tracked armored vehicle, the FV510 Warrior. British doctrine is that a vehicle should carry troops under protection to the objective and then give firepower support when they have disembarked. While normally classified as an IFV, the Warrior fills the role of an APC in British service and infantrymen do not remain embarked during combat.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "The role of the IFV is closely linked to mechanized infantry doctrine. While some IFVs are armed with a direct fire gun or anti-tank guided missiles for close infantry support, they are not intended to assault armored and mechanized forces with any type of infantry on their own, mounted or not. Rather, the IFV's role is to give an infantry unit battlefield, tactical, and operational mobility during combined arms operations.",
"title": "Doctrine"
},
{
"paragraph_id": 25,
"text": "Most IFVs either complement tanks as part of an armored battalion, brigade, or division. Others perform traditional infantry missions supported by tanks. Early development of IFVs in a number of Western nations was promoted primarily by armor officers who wanted to integrate tanks with supporting infantry in armored divisions. There were a few exceptions to the rule: for example, the Bundeswehr's decision to adopt the SPz 12-3 was largely due to the experiences of Wehrmacht panzergrenadiers who had been inappropriately ordered to undertake combat operations better suited for armor. Hence, the Bundeswehr concluded that infantry should only fight while mounted in their own armored vehicles, ideally supported by tanks. This doctrinal trend was later subsumed into the armies of other Western nations, including the US, leading to the widespread conclusion that IFVs should be confined largely to assisting the forward momentum of tanks.",
"title": "Doctrine"
},
{
"paragraph_id": 26,
"text": "The Soviet Army granted more flexibility in this regard to its IFV doctrine, allowing for the mechanized infantry to occupy terrain that compromised an enemy defense, carry out flanking movements, or lure armor into ill-advised counterattacks. While they still performed an auxiliary role to tanks, the notion of using IFVs in these types of engagements dictated that they be heavily armed, which was reflected in the BMP-1 and its successors. Additionally, Soviet airborne doctrine made use of the BMD series of IFVs to operate in concert with paratroops rather than traditional mechanized or armored formations.",
"title": "Doctrine"
},
{
"paragraph_id": 27,
"text": "IFVs assumed a new significance after the Yom Kippur War. In addition to heralding the combat debut of the BMP-1, that conflict demonstrated the newfound significance of anti-tank guided missiles and the obsolescence of independent armored attacks. More emphasis was placed on combined arms offensives, and the importance of mechanized infantry to support tanks reemerged.",
"title": "Doctrine"
},
{
"paragraph_id": 28,
"text": "As a result of the Yom Kippur War, the Soviet Union attached more infantry to its armored formations and the US accelerated its long-delayed IFV development program. An IFV capable of accompanying tanks for the purpose of suppressing anti-tank weapons and the hostile infantry which operated them was seen as necessary to avoid the devastation wreaked on purely armored Israeli formations.",
"title": "Doctrine"
},
{
"paragraph_id": 29,
"text": "The US Army defines all vehicles classed as IFVs as having three essential characteristics: they are armed with at least a medium-caliber cannon or automatic grenade launcher, at least sufficiently protected against small arms fire, and possess off-road mobility. It also identifies all IFVs as having some characteristics of an APC and a light tank.",
"title": "Design"
},
{
"paragraph_id": 30,
"text": "The United Nations Register for Conventional Arms (UNROCA) simply defines an IFV as any armored vehicle \"designed to fight with soldiers on board\" and \"to accompany tanks\". UNROCA makes a clear distinction between IFVs and APCs, as the former's primary mission is combat rather than general transport.",
"title": "Design"
},
{
"paragraph_id": 31,
"text": "All IFVs possess armored hulls protected against rifle and machine gun fire, and some are equipped with active protection systems. Most have lighter armor than main battle tanks to ensure mobility. Armies have generally accepted risk in reduced protection to recapitalize on an IFV's mobility, weight and speed. Their fully enclosed hulls offer protection from artillery fragments and residual environmental contaminants as well as limit exposure time to the mounted infantry during extended movements over open ground.",
"title": "Design"
},
{
"paragraph_id": 32,
"text": "Many IFVs also have sharply angled hulls that offer a relatively high degree of protection for their armor thickness. The BMP, Boragh, BVP M-80, and their respective variants all possess steel hulls with a distribution of armor and steep angling that protect them during frontal advances. The BMP-1 was vulnerable to heavy machine guns at close range on its flanks or rear, leading to a variety of more heavily armored marks appearing from 1979 onward.",
"title": "Design"
},
{
"paragraph_id": 33,
"text": "The Bradley possessed a lightweight aluminum alloy hull, which in most successive marks has been bolstered by the addition of explosive reactive and slat armor, spaced laminate belts, and steel track skirts. Throughout its life cycle, an IFV is expected to gain 30% more weight from armor additions.",
"title": "Design"
},
{
"paragraph_id": 34,
"text": "As asymmetric conflicts become more common, an increasing concern with regards to IFV protection has been adequate countermeasures against land mines and improvised explosive devices. During the Iraq War, inadequate mine protection in US Bradleys forced their crews to resort to makeshift strategies such as lining the hull floors with sandbags. A few IFVs, such as the Ratel, have been specifically engineered to resist mine explosions.",
"title": "Design"
},
{
"paragraph_id": 35,
"text": "IFVs may be equipped with: turrets carrying autocannons of various calibers, low or medium velocity tank guns, anti-tank guided missiles, or automatic grenade launchers.",
"title": "Design"
},
{
"paragraph_id": 36,
"text": "With a few exceptions, such as the BMP-1 and the BMP-3, designs such as the Marder and the BMP-2 have set the trend of arming IFVs with an autocannon suitable for use against lightly armored vehicles, low-flying aircraft, and dismounted infantry. This reflected the growing inclination to view IFVs as auxiliaries of armored formations: a small or medium caliber autocannon was perceived as an ideal suppressive weapon to complement large caliber tank fire. IFVs armed with miniature tank guns did not prove popular because many of the roles they were expected to perform were better performed by accompanying tanks.",
"title": "Design"
},
{
"paragraph_id": 37,
"text": "The BMP-1, which was the first IFV to carry a relatively large cannon, came under criticism during the Yom Kippur War for its mediocre individual accuracy, due in part to the low velocities of its projectiles. During the Soviet–Afghan War, BMP-1 crews also complained that their armament lacked the elevation necessary to engage insurgents in mountainous terrain. The effectiveness of large caliber, low-velocity guns like the 2A28 Grom on the BMP-1 and BMD-1 was also much reduced by the appearance of Chobham armor on Western tanks.",
"title": "Design"
},
{
"paragraph_id": 38,
"text": "The Ratel, which included a variant armed with a 90mm low-velocity gun, was utilized in South African combat operations against Angolan and Cuban armored formations during the South African Border War, with mixed results. Although the Ratels succeeded in destroying a large number of Angolan tanks and APCs, they were hampered by many of the same problems as the BMP-1: mediocre standoff ranges, inferior fire control, and a lack of stabilized main gun. The Ratels' heavy armament also tempted South African commanders to utilize them as light tanks rather than in their intended role of infantry support.",
"title": "Design"
},
{
"paragraph_id": 39,
"text": "Another design feature of the BMP-1 did prove more successful in establishing a precedent for future IFVs: its inclusion of an anti-tank missile system. This consisted of a rail-launcher firing 9M14 Malyutka missiles which had to be reloaded manually from outside the BMP's turret. Crew members had to expose themselves to enemy fire to reload the missiles, and they could not guide them effectively from inside the confines of the turret space.",
"title": "Design"
},
{
"paragraph_id": 40,
"text": "The BMP-2 and later variants of the BMP-1 made use of semiautonomous guided missile systems. In 1978, the Bundeswehr became the first Western army to embrace this trend when it retrofitted all its Marders with launchers for MILAN anti-tank missiles.",
"title": "Design"
},
{
"paragraph_id": 41,
"text": "The US Army added a launcher for TOW anti-tank missiles to its fleet of Bradleys, despite the fact that this greatly reduced the interior space available for seating the embarked infantry. This was justified on the basis that the Bradley needed to not only engage and destroy other IFVs, but support tanks in the destruction of other tanks during combined arms operations.",
"title": "Design"
},
{
"paragraph_id": 42,
"text": "IFVs are designed to have the strategic and tactical mobility necessary to keep pace with tanks during rapid maneuvers. Some, like the BMD series, have airborne and amphibious capabilities. IFVs may be either wheeled or tracked; tracked IFVs are usually more heavily armored and possess greater carrying capacity. Wheeled IFVs are cheaper and simpler to produce, maintain, and operate. From a logistical perspective, they are also ideal for an army without widespread access to transporters or a developed rail network to deploy its armor.",
"title": "Design"
}
]
| An infantry fighting vehicle (IFV), also known as a mechanized infantry combat vehicle (MICV), is a type of armoured fighting vehicle used to carry infantry into battle and provide direct-fire support. The 1990 Treaty on Conventional Armed Forces in Europe defines an infantry fighting vehicle as "an armoured combat vehicle which is designed and equipped primarily to transport a combat infantry squad, and which is armed with an integral or organic cannon of at least 20 millimeters calibre and sometimes an antitank missile launcher". IFVs often serve both as the principal weapons system and as the mode of transport for a mechanized infantry unit. Infantry fighting vehicles are distinct from armored personnel carriers (APCs), which are transport vehicles armed only for self-defense and not specifically engineered to fight on their own. IFVs are designed to be more mobile than tanks and are equipped with a rapid-firing autocannon or a large conventional gun; they may include side ports for infantrymen to fire their personal weapons while on board. The IFV rapidly gained popularity with armies worldwide due to a demand for vehicles with higher firepower than APCs that were less expensive and easier to maintain than tanks. Nevertheless, it did not supersede the APC concept altogether, due to the latter's continued usefulness in specialized roles. Some armies continue to maintain fleets of both IFVs and APCs. | 2002-02-25T15:51:15Z | 2023-12-17T21:26:43Z | [
"Template:Cite book",
"Template:Use dmy dates",
"Template:Refn",
"Template:Lang",
"Template:Div col",
"Template:Cite journal",
"Template:Short description",
"Template:Reflist",
"Template:Cite thesis",
"Template:Modern IFV and APC",
"Template:Cite web",
"Template:Authority control",
"Template:Redirect",
"Template:For",
"Template:Div col end"
]
| https://en.wikipedia.org/wiki/Infantry_fighting_vehicle |
15,167 | ICQ | ICQ New is a cross-platform instant messaging (IM) and VoIP client. The name ICQ derives from the English phrase "I Seek You". Originally developed by the Israeli company Mirabilis in 1996, the client was bought by AOL in 1998, and then by Mail.Ru Group (now VK) in 2010.
The ICQ client application and service were initially released in November 1996, freely available to download. The business did not have traditional marketing and relied mostly on word-of-mouth advertising instead, with customers telling their friends about it, who then informed their friends, and so on. ICQ was among the first stand-alone instant messenger (IM) applications—while real-time chat was not in itself new (Internet Relay Chat (IRC) being the most common platform at the time), the concept of a fully centralized service with individual user accounts focused on one-on-one conversations set the blueprint for later instant messaging services like AIM, and its influence is seen in modern social media applications. ICQ became the first widely adopted IM platform.
At its peak around 2001, ICQ had more than 100 million accounts registered. At the time of the Mail.Ru acquisition in 2010, there were around 42 million daily users. In 2020, the Mail.Ru Group, which owns ICQ, decided to launch new software, "ICQ New", based on its messenger. The updated messenger was presented to the general public on April 6, 2020. In 2022, ICQ had about 11 million monthly users.
During the second week of January 2021, ICQ saw a renewed increase in popularity in Hong Kong, spurred on by the controversy over WhatsApp's privacy policy update. The number of downloads for the application increased 35-fold in the region. In 2023, an investigation by Brazilian news outlet Núcleo Jornalismo found that ICQ was used to freely share child pornography due to lax moderation policies.
Private chats are a conversation between two users. When logging into an account, the chat can be accessed from any device thanks to cloud synchronization. A user can delete a sent message at any time either in their own chat or in their conversation partner's, and a notification will be received instead indicating that the message has been deleted.
Important messages from group or private chats, as well as media content of unlimited number and size, can be sent to the conversation with oneself. Essentially, this chat acts as free cloud storage.
These are special chats in which up to 25 thousand participants can take part at the same time. Any user can create a group. A user can hide their phone number from other participants; there is an advanced polling feature; it is possible to see which group members have read a message; and notifications can be switched off for messages from specific group members.
An alternative to blogs. Channel authors can publish posts as text messages and also attach media files. Once the post is published, subscribers receive a notification as they would from regular and group chats. The channel author can remain anonymous and does not have to show any information in the channel description.
A special API-bot is available and can be used by anyone to create a bot, i.e. a small program which performs specific actions and interacts with the user. Bots can be used in a variety of ways ranging from entertainment to business services.
Stickers (small images or photos expressing some form of emotion) are available to make communication via the application more emotive and personalized. Users can use the sticker library already available or upload their own. In addition, thanks to machine learning the software will recommend a sticker during communication by itself.
Masks are images that are superimposed onto the camera in real-time. They can be used during video calls, superimposed onto photos and sent to other users.
A nickname is a name made up by a user. It can replace a phone number when searching for and adding a contact. By using a nickname, users can share their contact details without providing a phone number.
Smart answers are short phrases that appear above the message box which can be used to answer messages. ICQ New analyzes the contents of a conversation and suggests a few pre-set answers.
ICQ New makes it possible to send audio messages. However, for people who do not want to or cannot listen to the audio, the audio can be automatically transcribed into text. All the user needs to do is click the relevant button and they will see the message in text form.
Aside from text messaging, users can call each other as well as arrange audio or video calls for up to five people. During the video call, AR-masks can be used.
ICQ users are identified and distinguished from one another by UIN, or User Identification Numbers, distributed in sequential order. The UIN was invented by Mirabilis, as the user name assigned to each user upon registration. Issued UINs started at '10,000' (5 digits) and every user receives a UIN when first registering with ICQ. As of ICQ6 users are also able to log in using the specific e-mail address they associated with their UIN during the initial registration process. Unlike other instant messaging software or web applications, on ICQ the only permanent user info is the UIN, although it is possible to search for other users using their associated e-mail address or any other detail they have made public by updating it in their account's public profile. In addition the user can change all of his or her personal information, including screen name and e-mail address, without having to re-register. Since 2000 ICQ and AIM users were able to add each other to their contact list without the need for any external clients. (The AIM service has since been discontinued.) As a response to UIN theft or sale of attractive UINs, ICQ started to store email addresses previously associated with a UIN. As such UINs that are stolen can sometimes be reclaimed. This applies only if (since 1999 onwards) a valid primary email address was entered into the user profile.
The founding company of ICQ, Mirabilis, was established in June 1996 by five Israeli developers: Yair Goldfinger, Sefi Vigiser, Amnon Amir, Arik Vardi, and Arik's father Yossi Vardi. ICQ was one of the first text-based messengers to reach a wide range of users.
The technology Mirabilis developed for ICQ was distributed free of charge. The technology's success encouraged AOL to acquire Mirabilis on June 8, 1998, for $287 million up front and $120 million in additional payments over three years based on performance levels. In 2002 AOL successfully patented the technology.
After the purchase the product was initially managed by Ariel Yarnitsky and Avi Shechter. ICQ's management changed at the end of 2003. Under the leadership of the new CEO, Orey Gilliam, who also assumed the responsibility for all of AOL's messaging business in 2007, ICQ resumed its growth; it was not only a highly profitable company, but one of AOL's most successful businesses. Eliav Moshe replaced Gilliam in 2009 and became ICQ's managing director.
In April 2010, AOL sold ICQ to Digital Sky Technologies, headed by Alisher Usmanov, for $187.5 million. While ICQ was displaced by AOL Instant Messenger, Google Talk, and other competitors in the U.S. and many other countries over the 2000s, it remained the most popular instant messaging network in Russian-speaking countries, and an important part of online culture. Popular UINs commanded prices of over 11,000₽ in 2010.
In September of that year, Digital Sky Technologies changed its name to Mail.Ru Group. Since the acquisition, Mail.ru has invested in turning ICQ from a desktop client to a mobile messaging system. As of 2013, around half of ICQ's users were using its mobile apps, and in 2014, the number of users began growing for the first time since the purchase.
In March 2016, the source code of the client was released under the Apache license on github.com.
AOL pursued an aggressive policy regarding alternative ("unauthorized") ICQ clients.
"Системное сообщение
System Message
On icq.com there was an "important message" for Russian-speaking ICQ users: "ICQ осуществляет поддержку только авторизированных версий программ: ICQ Lite и ICQ 6.5." ("ICQ supports only authorized versions of programs: ICQ Lite and ICQ 6.5.")
According to a Novaya Gazeta article published in May 2018, Russian intelligence agencies had been able to read ICQ users' correspondence online during crime investigations. The article examined 34 sentences handed down by Russian courts in which evidence of the defendants' guilt was obtained by reading correspondence on a PC or mobile device. Of the fourteen cases in which ICQ was involved, in six the information was captured before the device was seized. Because the rival service Telegram blocks all access for the agencies, the advisor to the Russian President, Herman Klimenko, recommended using ICQ instead.
AOL's OSCAR network protocol used by ICQ is proprietary, and using a third-party client is a violation of the ICQ Terms of Service. Nevertheless, a number of third-party clients have been created through reverse engineering and protocol descriptions. These clients include:
AOL supported clients include: | [
{
"paragraph_id": 0,
"text": "ICQ New is a cross-platform instant messaging (IM) and VoIP client. The name ICQ derives from the English phrase \"I Seek You\". Originally developed by the Israeli company Mirabilis in 1996, the client was bought by AOL in 1998, and then by Mail.Ru Group (now VK) in 2010.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The ICQ client application and service were initially released in November 1996, freely available to download. The business did not have traditional marketing and relied mostly on word-of-mouth advertising instead, with customers telling their friends about it, who then informed their friends, and so on. ICQ was among the first stand-alone instant messenger (IM) applications—while real-time chat was not in itself new (Internet Relay Chat (IRC) being the most common platform at the time), the concept of a fully centralized service with individual user accounts focused on one-on-one conversations set the blueprint for later instant messaging services like AIM, and its influence is seen in modern social media applications. ICQ became the first widely adopted IM platform.",
"title": ""
},
{
"paragraph_id": 2,
"text": "At its peak around 2001, ICQ had more than 100 million accounts registered. At the time of the Mail.Ru acquisition in 2010, there were around 42 million daily users. In 2020, the Mail.Ru Group, which owns ICQ, decided to launch new software, \"ICQ New\", based on its messenger. The updated messenger was presented to the general public on April 6, 2020. In 2022, ICQ had about 11 million monthly users.",
"title": ""
},
{
"paragraph_id": 3,
"text": "During the second week of January 2021, ICQ saw a renewed increase in popularity in Hong Kong, spurred on by the controversy over WhatsApp's privacy policy update. The number of downloads for the application increased 35-fold in the region. In 2023, an investigation by Brazilian news outlet Núcleo Jornalismo found that ICQ was used to freely share child pornography due to lax moderation policies.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Private chats are a conversation between two users. When logging into an account, the chat can be accessed from any device thanks to cloud synchronization. A user can delete a sent message at any time either in their own chat or in their conversation partner's, and a notification will be received instead indicating that the message has been deleted.",
"title": "Features"
},
{
"paragraph_id": 5,
"text": "Any important messages from group or private chats, as well as an unlimited number and size of media content, can be sent to the conversation with oneself. Essentially, this chat acts as a free cloud storage.",
"title": "Features"
},
{
"paragraph_id": 6,
"text": "These are special chats where chats can take place of up to 25 thousand participants at the same time. Any user can create a group. A user can hide their phone number from other participants; there is an advanced polling feature; there is the possibility to see which group members have read a message, and notifications can be switched off for messages from specific group members.",
"title": "Features"
},
{
"paragraph_id": 7,
"text": "An alternative to blogs. Channel authors can publish posts as text messages and also attach media files. Once the post is published, subscribers receive a notification as they would from regular and group chats. The channel author can remain anonymous and does not have to show any information in the channel description.",
"title": "Features"
},
{
"paragraph_id": 8,
"text": "A special API-bot is available and can be used by anyone to create a bot, i.e. a small program which performs specific actions and interacts with the user. Bots can be used in a variety of ways ranging from entertainment to business services.",
"title": "Features"
},
{
"paragraph_id": 9,
"text": "Stickers (small images or photos expressing some form of emotion) are available to make communication via the application more emotive and personalized. Users can use the sticker library already available or upload their own. In addition, thanks to machine learning the software will recommend a sticker during communication by itself.",
"title": "Features"
},
{
"paragraph_id": 10,
"text": "Masks are images that are superimposed onto the camera in real-time. They can be used during video calls, superimposed onto photos and sent to other users.",
"title": "Features"
},
{
"paragraph_id": 11,
"text": "A nickname is a name made up by a user. It can replace a phone number when searching for and adding user contact. By using a nickname, users can share their contact details without providing a phone number.",
"title": "Features"
},
{
"paragraph_id": 12,
"text": "Smart answers are short phrases that appear above the message box which can be used to answer messages. ICQ New analyzes the contents of a conversation and suggests a few pre-set answers.",
"title": "Features"
},
{
"paragraph_id": 13,
"text": "ICQ New makes it possible to send audio messages. However, for people who do not want to or cannot listen to the audio, the audio can be automatically transcribed into text. All the user needs to do is click the relevant button and they will see the message in text form.",
"title": "Features"
},
{
"paragraph_id": 14,
"text": "Aside from text messaging, users can call each other as well as arrange audio or video calls for up to five people. During the video call, AR-masks can be used.",
"title": "Features"
},
{
"paragraph_id": 15,
"text": "ICQ users are identified and distinguished from one another by UIN, or User Identification Numbers, distributed in sequential order. The UIN was invented by Mirabilis, as the user name assigned to each user upon registration. Issued UINs started at '10,000' (5 digits) and every user receives a UIN when first registering with ICQ. As of ICQ6 users are also able to log in using the specific e-mail address they associated with their UIN during the initial registration process. Unlike other instant messaging software or web applications, on ICQ the only permanent user info is the UIN, although it is possible to search for other users using their associated e-mail address or any other detail they have made public by updating it in their account's public profile. In addition the user can change all of his or her personal information, including screen name and e-mail address, without having to re-register. Since 2000 ICQ and AIM users were able to add each other to their contact list without the need for any external clients. (The AIM service has since been discontinued.) As a response to UIN theft or sale of attractive UINs, ICQ started to store email addresses previously associated with a UIN. As such UINs that are stolen can sometimes be reclaimed. This applies only if (since 1999 onwards) a valid primary email address was entered into the user profile.",
"title": "UIN"
},
{
"paragraph_id": 16,
"text": "The founding company of ICQ, Mirabilis, was established in June 1996 by five Israeli developers: Yair Goldfinger, Sefi Vigiser, Amnon Amir, Arik Vardi, and Arik's father Yossi Vardi. ICQ was one of the first text-based messengers to reach a wide range of users.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "The technology Mirabilis developed for ICQ was distributed free of charge. The technology's success encouraged AOL to acquire Mirabilis on June 8, 1998, for $287 million up front and $120 million in additional payments over three years based on performance levels. In 2002 AOL successfully patented the technology.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "After the purchase the product was initially managed by Ariel Yarnitsky and Avi Shechter. ICQ's management changed at the end of 2003. Under the leadership of the new CEO, Orey Gilliam, who also assumed the responsibility for all of AOL's messaging business in 2007, ICQ resumed its growth; it was not only a highly profitable company, but one of AOL's most successful businesses. Eliav Moshe replaced Gilliam in 2009 and became ICQ's managing director.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "In April 2010, AOL sold ICQ to Digital Sky Technologies, headed by Alisher Usmanov, for $187.5 million. While ICQ was displaced by AOL Instant Messenger, Google Talk, and other competitors in the U.S. and many other countries over the 2000s, it remained the most popular instant messaging network in Russian-speaking countries, and an important part of online culture. Popular UINs demanded over 11,000₽ in 2010.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "In September of that year, Digital Sky Technologies changed its name to Mail.Ru Group. Since the acquisition, Mail.ru has invested in turning ICQ from a desktop client to a mobile messaging system. As of 2013, around half of ICQ's users were using its mobile apps, and in 2014, the number of users began growing for the first time since the purchase.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "In March 2016, the source code of the client was released under the Apache license on github.com.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "AOL pursued an aggressive policy regarding alternative (\"unauthorized\") ICQ clients.",
"title": "Criticism"
},
{
"paragraph_id": 23,
"text": "\"Системное сообщение",
"title": "Criticism"
},
{
"paragraph_id": 24,
"text": "System Message",
"title": "Criticism"
},
{
"paragraph_id": 25,
"text": "On icq.com there was an \"important message\" for Russian-speaking ICQ users: \"ICQ осуществляет поддержку только авторизированных версий программ: ICQ Lite и ICQ 6.5.\" (\"ICQ supports only authorized versions of programs: ICQ Lite and ICQ 6.5.\")",
"title": "Criticism"
},
{
"paragraph_id": 26,
"text": "According to a Novaya Gazeta article published in May 2018, Russian intelligence agencies had access to online reading of ICQ users' correspondence during crime investigations. The article examined 34 sentences of Russian courts, during the investigation of which the evidence of the defendants' guilt was obtained by reading correspondence on a PC or mobile devices. Of the fourteen cases in which ICQ was involved, in six cases the capturing of information occurred before the seizure of the device. Because the rival service Telegram blocks all access for the agencies, the Advisor of the Russian President, Herman Klimenko, recommended to use ICQ instead.",
"title": "Criticism"
},
{
"paragraph_id": 27,
"text": "AOL's OSCAR network protocol used by ICQ is proprietary and using a third party client is a violation of ICQ Terms of Service. Nevertheless, a number of third-party clients have been created by using reverse-engineering and protocol descriptions. These clients include:",
"title": "Clients"
},
{
"paragraph_id": 28,
"text": "AOL supported clients include:",
"title": "Clients"
}
]
| ICQ New is a cross-platform instant messaging (IM) and VoIP client. The name ICQ derives from the English phrase "I Seek You". Originally developed by the Israeli company Mirabilis in 1996, the client was bought by AOL in 1998, and then by Mail.Ru Group in 2010. The ICQ client application and service were initially released in November 1996, freely available to download. The business did not have traditional marketing and relied mostly on word-of-mouth advertising instead, with customers telling their friends about it, who then informed their friends, and so on. ICQ was among the first stand-alone instant messenger (IM) applications—while real-time chat was not in itself new, the concept of a fully centralized service with individual user accounts focused on one-on-one conversations set the blueprint for later instant messaging services like AIM, and its influence is seen in modern social media applications. ICQ became the first widely adopted IM platform. At its peak around 2001, ICQ had more than 100 million accounts registered. At the time of the Mail.Ru acquisition in 2010, there were around 42 million daily users. In 2020, the Mail.Ru Group, which owns ICQ, decided to launch new software, "ICQ New", based on its messenger. The updated messenger was presented to the general public on April 6, 2020. In 2022, ICQ had about 11 million monthly users. During the second week of January 2021, ICQ saw a renewed increase in popularity in Hong Kong, spurred on by the controversy over WhatsApp's privacy policy update. The number of downloads for the application increased 35-fold in the region. In 2023, an investigation by Brazilian news outlet Núcleo Jornalismo found that ICQ was used to freely share child pornography due to lax moderation policies. | 2001-10-17T19:43:34Z | 2023-12-17T10:02:29Z | [
"Template:Authority control",
"Template:Infobox software",
"Template:Reflist",
"Template:Cite web",
"Template:Cite news",
"Template:Commons category",
"Template:Instant messaging",
"Template:Short description",
"Template:Other uses",
"Template:Cite press release",
"Template:Citation",
"Template:Cite patent",
"Template:Cite magazine"
]
| https://en.wikipedia.org/wiki/ICQ |
15,169 | Impressionism | Impressionism was a 19th-century art movement characterized by relatively small, thin, yet visible brush strokes, open composition, emphasis on accurate depiction of light in its changing qualities (often accentuating the effects of the passage of time), ordinary subject matter, unusual visual angles, and inclusion of movement as a crucial element of human perception and experience. Impressionism originated with a group of Paris-based artists whose independent exhibitions brought them to prominence during the 1870s and 1880s.
The Impressionists faced harsh opposition from the conventional art community in France. The name of the style derives from the title of a Claude Monet work, Impression, soleil levant (Impression, Sunrise), which provoked the critic Louis Leroy to coin the term in a satirical 1874 review published in the Parisian newspaper Le Charivari. The development of Impressionism in the visual arts was soon followed by analogous styles in other media that became known as impressionist music and impressionist literature.
Radicals in their time, early Impressionists violated the rules of academic painting. They constructed their pictures from freely brushed colours that took precedence over lines and contours, following the example of painters such as Eugène Delacroix and J. M. W. Turner. They also painted realistic scenes of modern life, and often painted outdoors. Previously, still lifes and portraits as well as landscapes were usually painted in a studio. The Impressionists found that they could capture the momentary and transient effects of sunlight by painting outdoors or en plein air. They portrayed overall visual effects instead of details, and used short "broken" brush strokes of mixed and pure unmixed colour—not blended smoothly or shaded, as was customary—to achieve an effect of intense colour vibration.
Impressionism emerged in France at the same time that a number of other painters, including the Italian artists known as the Macchiaioli, and Winslow Homer in the United States, were also exploring plein-air painting. The Impressionists, however, developed new techniques specific to the style. Encompassing what its adherents argued was a different way of seeing, it is an art of immediacy and movement, of candid poses and compositions, of the play of light expressed in a bright and varied use of colour.
The public, at first hostile, gradually came to believe that the Impressionists had captured a fresh and original vision, even if the art critics and art establishment disapproved of the new style. By recreating the sensation in the eye that views the subject, rather than delineating the details of the subject, and by creating a welter of techniques and forms, Impressionism is a precursor of various painting styles, including Neo-Impressionism, Post-Impressionism, Fauvism, and Cubism.
In the middle of the 19th century—a time of rapid industrialization and unsettling social change in France, as Emperor Napoleon III rebuilt Paris and waged war—the Académie des Beaux-Arts dominated French art. The Académie was the preserver of traditional French painting standards of content and style. Historical subjects, religious themes, and portraits were valued; landscape and still life were not. The Académie preferred carefully finished images that looked realistic when examined closely. Paintings in this style were made up of precise brush strokes carefully blended to hide the artist's hand in the work. Colour was restrained and often toned down further by the application of a golden varnish.
The Académie had an annual, juried art show, the Salon de Paris, and artists whose work was displayed in the show won prizes, garnered commissions, and enhanced their prestige. The standards of the juries represented the values of the Académie, represented by the works of such artists as Jean-Léon Gérôme and Alexandre Cabanel. Using an eclectic mix of techniques and formulas established in Western painting since the Renaissance—such as linear perspective and figure types derived from Classical Greek art—these artists produced escapist visions of a reassuringly ordered world. By the 1850s, some artists, notably the Realist painter Gustave Courbet, had gained public attention and critical censure by depicting contemporary realities without the idealization demanded by the Académie.
In the early 1860s, four young painters—Claude Monet, Pierre-Auguste Renoir, Alfred Sisley, and Frédéric Bazille—met while studying under the academic artist Charles Gleyre. They discovered that they shared an interest in painting landscape and contemporary life rather than historical or mythological scenes. Following a practice—pioneered by artists such as the Englishman John Constable—that had become increasingly popular by mid-century, they often ventured into the countryside together to paint in the open air. Their purpose was not to make sketches to be developed into carefully finished works in the studio, as was the usual custom, but to complete their paintings out-of-doors. By painting in sunlight directly from nature, and making bold use of the vivid synthetic pigments that had become available since the beginning of the century, they began to develop a lighter and brighter manner of painting that extended further the Realism of Courbet and the Barbizon school. A favourite meeting place for the artists was the Café Guerbois on Avenue de Clichy in Paris, where the discussions were often led by Édouard Manet, whom the younger artists greatly admired. They were soon joined by Camille Pissarro, Paul Cézanne, and Armand Guillaumin.
During the 1860s, the Salon jury routinely rejected about half of the works submitted by Monet and his friends in favour of works by artists faithful to the approved style. In 1863, the Salon jury rejected Manet's The Luncheon on the Grass (Le déjeuner sur l'herbe) primarily because it depicted a nude woman with two clothed men at a picnic. While the Salon jury routinely accepted nudes in historical and allegorical paintings, they condemned Manet for placing a realistic nude in a contemporary setting. The jury's severely worded rejection of Manet's painting appalled his admirers, and the unusually large number of rejected works that year perturbed many French artists.
After Emperor Napoleon III saw the rejected works of 1863, he decreed that the public be allowed to judge the work themselves, and the Salon des Refusés (Salon of the Refused) was organized. While many viewers came only to laugh, the Salon des Refusés drew attention to the existence of a new tendency in art and attracted more visitors than the regular Salon.
Artists' petitions requesting a new Salon des Refusés in 1867, and again in 1872, were denied. In December 1873, Monet, Renoir, Pissarro, Sisley, Cézanne, Berthe Morisot, Edgar Degas and several other artists founded the Société Anonyme Coopérative des Artistes Peintres, Sculpteurs, Graveurs ("Company of Painters, Sculptors, and Engravers") to exhibit their artworks independently. Members of the association were expected to forswear participation in the Salon. The organizers invited a number of other progressive artists to join them in their inaugural exhibition, including the older Eugène Boudin, whose example had first persuaded Monet to adopt plein air painting years before. Another painter who greatly influenced Monet and his friends, Johan Jongkind, declined to participate, as did Édouard Manet. In total, thirty artists participated in their first exhibition, held in April 1874 at the studio of the photographer Nadar.
The critical response was mixed. Monet and Cézanne received the harshest attacks. Critic and humorist Louis Leroy wrote a scathing review in the newspaper Le Charivari in which, making wordplay with the title of Claude Monet's Impression, Sunrise (Impression, soleil levant), he gave the artists the name by which they became known. Derisively titling his article "The Exhibition of the Impressionists", Leroy declared that Monet's painting was, at most, a sketch, and could hardly be termed a finished work.
He wrote, in the form of a dialogue between viewers,
The term Impressionist quickly gained favour with the public. It was also accepted by the artists themselves, even though they were a diverse group in style and temperament, unified primarily by their spirit of independence and rebellion. They exhibited together—albeit with shifting membership—eight times between 1874 and 1886. The Impressionists' style, with its loose, spontaneous brushstrokes, would soon become synonymous with modern life.
Monet, Sisley, Morisot, and Pissarro may be considered the "purest" Impressionists, in their consistent pursuit of an art of spontaneity, sunlight, and colour. Degas rejected much of this, as he believed in the primacy of drawing over colour and belittled the practice of painting outdoors. Renoir turned away from Impressionism for a time during the 1880s, and never entirely regained his commitment to its ideas. Édouard Manet, although regarded by the Impressionists as their leader, never abandoned his liberal use of black as a colour (while Impressionists avoided its use and preferred to obtain darker colours by mixing), and never participated in the Impressionist exhibitions. He continued to submit his works to the Salon, where his painting Spanish Singer had won a 2nd class medal in 1861, and he urged the others to do likewise, arguing that "the Salon is the real field of battle" where a reputation could be made.
Among the artists of the core group (minus Bazille, who had died in the Franco-Prussian War in 1870), defections occurred as Cézanne, followed later by Renoir, Sisley, and Monet, abstained from the group exhibitions so they could submit their works to the Salon. Disagreements arose from issues such as Guillaumin's membership in the group, championed by Pissarro and Cézanne against opposition from Monet and Degas, who thought him unworthy. Degas invited Mary Cassatt to display her work in the 1879 exhibition, but also insisted on the inclusion of Jean-François Raffaëlli, Ludovic Lepic, and other realists who did not represent Impressionist practices, causing Monet in 1880 to accuse the Impressionists of "opening doors to first-come daubers". In this regard, the seventh Paris Impressionist exhibition in 1882 was the most selective of all including the works of only nine "true" impressionists, namely Gustave Caillebotte, Paul Gauguin, Armand Guillaumin, Claude Monet, Berthe Morisot, Camille Pissarro, Pierre-Auguste Renoir, Alfred Sisley, and Victor Vignon. The group then divided again over the invitations to Paul Signac and Georges Seurat to exhibit with them at the 8th Impressionist exhibition in 1886. Pissarro was the only artist to show at all eight Paris Impressionist exhibitions.
The individual artists achieved few financial rewards from the Impressionist exhibitions, but their art gradually won a degree of public acceptance and support. Their dealer, Durand-Ruel, played a major role in this as he kept their work before the public and arranged shows for them in London and New York. Although Sisley died in poverty in 1899, Renoir had a great Salon success in 1879. Monet became secure financially during the early 1880s and so did Pissarro by the early 1890s. By this time the methods of Impressionist painting, in a diluted form, had become commonplace in Salon art.
French painters who prepared the way for Impressionism include the Romantic colourist Eugène Delacroix; the leader of the realists, Gustave Courbet; and painters of the Barbizon school such as Théodore Rousseau. The Impressionists learned much from the work of Johan Barthold Jongkind, Jean-Baptiste-Camille Corot and Eugène Boudin, who painted from nature in a direct and spontaneous style that prefigured Impressionism, and who befriended and advised the younger artists.
A number of identifiable techniques and working habits contributed to the innovative style of the Impressionists. Although these methods had been used by previous artists—and are often conspicuous in the work of artists such as Frans Hals, Diego Velázquez, Peter Paul Rubens, John Constable, and J. M. W. Turner—the Impressionists were the first to use them all together, and with such consistency. These techniques include:
New technology played a role in the development of the style. Impressionists took advantage of the mid-century introduction of premixed paints in tin tubes (resembling modern toothpaste tubes), which allowed artists to work more spontaneously, both outdoors and indoors. Previously, painters made their own paints individually, by grinding and mixing dry pigment powders with linseed oil, which were then stored in animal bladders.
Many vivid synthetic pigments became commercially available to artists for the first time during the 19th century. These included cobalt blue, viridian, cadmium yellow, and synthetic ultramarine blue, all of which were in use by the 1840s, before Impressionism. The Impressionists' manner of painting made bold use of these pigments, and of even newer colours such as cerulean blue, which became commercially available to artists in the 1860s.
The Impressionists' progress toward a brighter style of painting was gradual. During the 1860s, Monet and Renoir sometimes painted on canvases prepared with the traditional red-brown or grey ground. By the 1870s, Monet, Renoir, and Pissarro usually chose to paint on grounds of a lighter grey or beige colour, which functioned as a middle tone in the finished painting. By the 1880s, some of the Impressionists had come to prefer white or slightly off-white grounds, and no longer allowed the ground colour a significant role in the finished painting.
The Impressionists reacted to modernity by exploring "a wide range of non-academic subjects in art" such as middle-class leisure activities and "urban themes, including train stations, cafés, brothels, the theater, and dance." They found inspiration in the newly widened avenues of Paris, bounded by new tall buildings that offered opportunities to depict bustling crowds, popular entertainments, and nocturnal lighting in artificially closed-off spaces. A painting such as Caillebotte's Paris Street; Rainy Day (1877) strikes a modern note by emphasizing the isolation of individuals amid the outsized buildings and spaces of the urban environment. When painting landscapes, the Impressionists did not hesitate to include the factories that were proliferating in the countryside. Earlier painters of landscapes had conventionally avoided smokestacks and other signs of industrialization, regarding them as blights on nature's order and unworthy of art.
Prior to the Impressionists, other painters, notably such 17th-century Dutch painters as Jan Steen, had emphasized common subjects, but their methods of composition were traditional. They arranged their compositions so that the main subject commanded the viewer's attention. J. M. W. Turner, while an artist of the Romantic era, anticipated the style of impressionism with his artwork. The Impressionists relaxed the boundary between subject and background so that the effect of an Impressionist painting often resembles a snapshot, a part of a larger reality captured as if by chance. Photography was gaining popularity, and as cameras became more portable, photographs became more candid. Photography inspired Impressionists to represent momentary action, not only in the fleeting lights of a landscape, but in the day-to-day lives of people.
The development of Impressionism can be considered partly as a reaction by artists to the challenge presented by photography, which seemed to devalue the artist's skill in reproducing reality. Both portrait and landscape paintings were deemed somewhat deficient and lacking in truth as photography "produced lifelike images much more efficiently and reliably".
In spite of this, photography actually inspired artists to pursue other means of creative expression, and rather than compete with photography to emulate reality, artists focused "on the one thing they could inevitably do better than the photograph—by further developing into an art form its very subjectivity in the conception of the image, the very subjectivity that photography eliminated". The Impressionists sought to express their perceptions of nature, rather than create exact representations. This allowed artists to depict subjectively what they saw with their "tacit imperatives of taste and conscience". Photography encouraged painters to exploit aspects of the painting medium, like colour, which photography then lacked: "The Impressionists were the first to consciously offer a subjective alternative to the photograph".
Another major influence was Japanese ukiyo-e art prints (Japonism). The art of these prints contributed significantly to the "snapshot" angles and unconventional compositions that became characteristic of Impressionism. An example is Monet's Jardin à Sainte-Adresse, 1867, with its bold blocks of colour and composition on a strong diagonal slant showing the influence of Japanese prints.
Edgar Degas was both an avid photographer and a collector of Japanese prints. His The Dance Class (La classe de danse) of 1874 shows both influences in its asymmetrical composition. The dancers are seemingly caught off guard in various awkward poses, leaving an expanse of empty floor space in the lower right quadrant. He also captured his dancers in sculpture, such as the Little Dancer of Fourteen Years.
Impressionists, in varying degrees, were looking for ways to depict visual experience and contemporary subjects. Female Impressionists were interested in these same ideals but had many social and career limitations compared to male Impressionists. They were particularly excluded from the imagery of the bourgeois social sphere of the boulevard, cafe, and dance hall. As well as imagery, women were excluded from the formative discussions that resulted in meetings in those places; that was where male Impressionists were able to form and share ideas about Impressionism. In the academic realm, women were believed to be incapable of handling complex subjects which led teachers to restrict what they taught female students. It was also considered unladylike to excel in art since women's true talents were then believed to center on homemaking and mothering.
Yet several women were able to find success during their lifetime, even though their careers were affected by personal circumstances – Bracquemond, for example, had a husband who was resentful of her work which caused her to give up painting. The four most well known, namely, Mary Cassatt, Eva Gonzalès, Marie Bracquemond, and Berthe Morisot, are, and were, often referred to as the 'Women Impressionists'. Their participation in the series of eight Impressionist exhibitions that took place in Paris from 1874 to 1886 varied: Morisot participated in seven, Cassatt in four, Bracquemond in three, and Gonzalès did not participate.
The critics of the time lumped these four together without regard to their personal styles, techniques, or subject matter. Critics viewing their works at the exhibitions often attempted to acknowledge the women artists' talents but circumscribed them within a limited notion of femininity. Arguing for the suitability of Impressionist technique to women's manner of perception, Parisian critic S.C. de Soissons wrote:
One can understand that women have no originality of thought, and that literature and music have no feminine character; but surely women know how to observe, and what they see is quite different from that which men see, and the art which they put in their gestures, in their toilet, in the decoration of their environment is sufficient to give us the idea of an instinctive, of a peculiar genius which resides in each one of them.
While Impressionism legitimized the domestic social life as subject matter, of which women had intimate knowledge, it also tended to limit them to that subject matter. Portrayals of often-identifiable sitters in domestic settings (which could offer commissions) were dominant in the exhibitions. The subjects of the paintings were often women interacting with their environment by either their gaze or movement. Cassatt, in particular, was aware of her placement of subjects: she kept her predominantly female figures from objectification and cliche; when they are not reading, they converse, sew, drink tea, and when they are inactive, they seem lost in thought.
The women Impressionists, like their male counterparts, were striving for "truth," for new ways of seeing and new painting techniques; each artist had an individual painting style. Women Impressionists (particularly Morisot and Cassatt) were conscious of the balance of power between women and objects in their paintings – the bourgeois women depicted are not defined by decorative objects, but instead, interact with and dominate the things with which they live. There are many similarities in their depictions of women who seem both at ease and subtly confined. Gonzalès' Box at the Italian Opera depicts a woman staring into the distance, at ease in a social sphere but confined by the box and the man standing next to her. Cassatt's painting Young Girl at a Window is brighter in color but remains constrained by the canvas edge as she looks out the window.
Despite their success in building careers, and despite Impressionism's demise being attributed to its allegedly feminine characteristics (its sensuality, dependence on sensation, physicality, and fluidity), the four women artists (and other, lesser-known women Impressionists) were largely omitted from art historical textbooks covering Impressionist artists until Tamar Garb's Women Impressionists, published in 1986. For example, Impressionism by Jean Leymarie, published in 1955, included no information on any women Impressionists.
Painter Androniqi Zengo Antoniu is co-credited with the introduction of impressionism to Albania.
The central figures in the development of Impressionism in France, listed alphabetically, were:
The Impressionists
Among the close associates of the Impressionists, Victor Vignon is the only artist outside the group of prominent names who participated in the highly exclusive Seventh Paris Impressionist Exhibition in 1882, which was a rejection of the previous, less restrictive exhibitions chiefly organized by Degas. Originally from the school of Corot, Vignon was a friend of Camille Pissarro, whose influence is evident in his impressionist style after the late 1870s, and a friend of the post-impressionist Vincent van Gogh.
There were several other close associates of the Impressionists who adopted their methods to some degree. These include Jean-Louis Forain (who participated in Impressionist exhibitions in 1879, 1880, 1881 and 1886) and Giuseppe De Nittis, an Italian artist living in Paris who participated in the first Impressionist exhibit at the invitation of Degas, although the other Impressionists disparaged his work. Federico Zandomeneghi was another Italian friend of Degas who showed with the Impressionists. Eva Gonzalès was a follower of Manet who did not exhibit with the group. James Abbott McNeill Whistler was an American-born painter who played a part in Impressionism although he did not join the group and preferred grayed colours. Walter Sickert, an English artist, was initially a follower of Whistler, and later an important disciple of Degas; he did not exhibit with the Impressionists. In 1904 the artist and writer Wynford Dewhurst wrote the first important study of the French painters published in English, Impressionist Painting: its genesis and development, which did much to popularize Impressionism in Great Britain.
By the early 1880s, Impressionist methods were affecting, at least superficially, the art of the Salon. Fashionable painters such as Jean Béraud and Henri Gervex found critical and financial success by brightening their palettes while retaining the smooth finish expected of Salon art. Works by these artists are sometimes casually referred to as Impressionism, despite their remoteness from Impressionist practice.
The influence of the French Impressionists lasted long after most of them had died. Artists like J.D. Kirszenbaum were borrowing Impressionist techniques throughout the twentieth century.
As the influence of Impressionism spread beyond France, artists, too numerous to list, became identified as practitioners of the new style. Some of the more important examples are:
The sculptor Auguste Rodin is sometimes called an Impressionist for the way he used roughly modeled surfaces to suggest transient light effects.
Pictorialist photographers whose work is characterized by soft focus and atmospheric effects have also been called Impressionists.
French Impressionist Cinema is a term applied to a loosely defined group of films and filmmakers in France from 1919 to 1929, although these years are debatable. French Impressionist filmmakers include Abel Gance, Jean Epstein, Germaine Dulac, Marcel L’Herbier, Louis Delluc, and Dmitry Kirsanoff.
Musical Impressionism is the name given to a movement in European classical music that arose in the late 19th century and continued into the middle of the 20th century. Originating in France, musical Impressionism is characterized by suggestion and atmosphere, and eschews the emotional excesses of the Romantic era. Impressionist composers favoured short forms such as the nocturne, arabesque, and prelude, and often explored uncommon scales such as the whole tone scale. Perhaps the most notable innovations of Impressionist composers were the introduction of major 7th chords and the extension of chord structures in 3rds to five- and six-part harmonies.
The influence of visual Impressionism on its musical counterpart is debatable. Claude Debussy and Maurice Ravel are generally considered the greatest Impressionist composers, but Debussy disavowed the term, calling it the invention of critics. Erik Satie was also considered in this category, though his approach was regarded as less serious and more a matter of musical novelty. Paul Dukas is another French composer sometimes considered an Impressionist, but his style is perhaps more closely aligned to the late Romanticists. Musical Impressionism beyond France includes the work of such composers as Ottorino Respighi (Italy), Ralph Vaughan Williams, Cyril Scott, and John Ireland (England), Manuel De Falla and Isaac Albeniz (Spain), and Charles Griffes (America).
The term Impressionism has also been used to describe works of literature in which a few select details suffice to convey the sensory impressions of an incident or scene. Impressionist literature is closely related to Symbolism, with its major exemplars being Baudelaire, Mallarmé, Rimbaud, and Verlaine. Authors such as Virginia Woolf, D.H. Lawrence, Henry James, and Joseph Conrad have written works that are Impressionistic in the way that they describe, rather than interpret, the impressions, sensations and emotions that constitute a character's mental life.
During the 1880s several artists began to develop different precepts for the use of colour, pattern, form, and line, derived from the Impressionist example: Vincent van Gogh, Paul Gauguin, Georges Seurat, and Henri de Toulouse-Lautrec. These artists were slightly younger than the Impressionists, and their work is known as post-Impressionism. Some of the original Impressionist artists also ventured into this new territory; Camille Pissarro briefly painted in a pointillist manner, and even Monet abandoned strict plein air painting. Paul Cézanne, who participated in the first and third Impressionist exhibitions, developed a highly individual vision emphasising pictorial structure, and he is more often called a post-Impressionist. Although these cases illustrate the difficulty of assigning labels, the work of the original Impressionist painters may, by definition, be categorised as Impressionism. | [
{
"paragraph_id": 0,
"text": "Impressionism was a 19th-century art movement characterized by relatively small, thin, yet visible brush strokes, open composition, emphasis on accurate depiction of light in its changing qualities (often accentuating the effects of the passage of time), ordinary subject matter, unusual visual angles, and inclusion of movement as a crucial element of human perception and experience. Impressionism originated with a group of Paris-based artists whose independent exhibitions brought them to prominence during the 1870s and 1880s.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Impressionists faced harsh opposition from the conventional art community in France. The name of the style derives from the title of a Claude Monet work, Impression, soleil levant (Impression, Sunrise), which provoked the critic Louis Leroy to coin the term in a satirical 1874 review published in the Parisian newspaper Le Charivari. The development of Impressionism in the visual arts was soon followed by analogous styles in other media that became known as impressionist music and impressionist literature.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Radicals in their time, early Impressionists violated the rules of academic painting. They constructed their pictures from freely brushed colours that took precedence over lines and contours, following the example of painters such as Eugène Delacroix and J. M. W. Turner. They also painted realistic scenes of modern life, and often painted outdoors. Previously, still lifes and portraits as well as landscapes were usually painted in a studio. The Impressionists found that they could capture the momentary and transient effects of sunlight by painting outdoors or en plein air. They portrayed overall visual effects instead of details, and used short \"broken\" brush strokes of mixed and pure unmixed colour—not blended smoothly or shaded, as was customary—to achieve an effect of intense colour vibration.",
"title": "Overview"
},
{
"paragraph_id": 3,
"text": "Impressionism emerged in France at the same time that a number of other painters, including the Italian artists known as the Macchiaioli, and Winslow Homer in the United States, were also exploring plein-air painting. The Impressionists, however, developed new techniques specific to the style. Encompassing what its adherents argued was a different way of seeing, it is an art of immediacy and movement, of candid poses and compositions, of the play of light expressed in a bright and varied use of colour.",
"title": "Overview"
},
{
"paragraph_id": 4,
"text": "The public, at first hostile, gradually came to believe that the Impressionists had captured a fresh and original vision, even if the art critics and art establishment disapproved of the new style. By recreating the sensation in the eye that views the subject, rather than delineating the details of the subject, and by creating a welter of techniques and forms, Impressionism is a precursor of various painting styles, including Neo-Impressionism, Post-Impressionism, Fauvism, and Cubism.",
"title": "Overview"
},
{
"paragraph_id": 5,
"text": "In the middle of the 19th century—a time of rapid industrialization and unsettling social change in France, as Emperor Napoleon III rebuilt Paris and waged war—the Académie des Beaux-Arts dominated French art. The Académie was the preserver of traditional French painting standards of content and style. Historical subjects, religious themes, and portraits were valued; landscape and still life were not. The Académie preferred carefully finished images that looked realistic when examined closely. Paintings in this style were made up of precise brush strokes carefully blended to hide the artist's hand in the work. Colour was restrained and often toned down further by the application of a golden varnish.",
"title": "Beginnings"
},
{
"paragraph_id": 6,
"text": "The Académie had an annual, juried art show, the Salon de Paris, and artists whose work was displayed in the show won prizes, garnered commissions, and enhanced their prestige. The standards of the juries represented the values of the Académie, represented by the works of such artists as Jean-Léon Gérôme and Alexandre Cabanel. Using an eclectic mix of techniques and formulas established in Western painting since the Renaissance—such as linear perspective and figure types derived from Classical Greek art—these artists produced escapist visions of a reassuringly ordered world. By the 1850s, some artists, notably the Realist painter Gustave Courbet, had gained public attention and critical censure by depicting contemporary realities without the idealization demanded by the Académie.",
"title": "Beginnings"
},
{
"paragraph_id": 7,
"text": "In the early 1860s, four young painters—Claude Monet, Pierre-Auguste Renoir, Alfred Sisley, and Frédéric Bazille—met while studying under the academic artist Charles Gleyre. They discovered that they shared an interest in painting landscape and contemporary life rather than historical or mythological scenes. Following a practice—pioneered by artists such as the Englishman John Constable— that had become increasingly popular by mid-century, they often ventured into the countryside together to paint in the open air. Their purpose was not to make sketches to be developed into carefully finished works in the studio, as was the usual custom, but to complete their paintings out-of-doors. By painting in sunlight directly from nature, and making bold use of the vivid synthetic pigments that had become available since the beginning of the century, they began to develop a lighter and brighter manner of painting that extended further the Realism of Courbet and the Barbizon school. A favourite meeting place for the artists was the Café Guerbois on Avenue de Clichy in Paris, where the discussions were often led by Édouard Manet, whom the younger artists greatly admired. They were soon joined by Camille Pissarro, Paul Cézanne, and Armand Guillaumin.",
"title": "Beginnings"
},
{
"paragraph_id": 8,
"text": "During the 1860s, the Salon jury routinely rejected about half of the works submitted by Monet and his friends in favour of works by artists faithful to the approved style. In 1863, the Salon jury rejected Manet's The Luncheon on the Grass (Le déjeuner sur l'herbe) primarily because it depicted a nude woman with two clothed men at a picnic. While the Salon jury routinely accepted nudes in historical and allegorical paintings, they condemned Manet for placing a realistic nude in a contemporary setting. The jury's severely worded rejection of Manet's painting appalled his admirers, and the unusually large number of rejected works that year perturbed many French artists.",
"title": "Beginnings"
},
{
"paragraph_id": 9,
"text": "After Emperor Napoleon III saw the rejected works of 1863, he decreed that the public be allowed to judge the work themselves, and the Salon des Refusés (Salon of the Refused) was organized. While many viewers came only to laugh, the Salon des Refusés drew attention to the existence of a new tendency in art and attracted more visitors than the regular Salon.",
"title": "Beginnings"
},
{
"paragraph_id": 10,
"text": "Artists' petitions requesting a new Salon des Refusés in 1867, and again in 1872, were denied. In December 1873, Monet, Renoir, Pissarro, Sisley, Cézanne, Berthe Morisot, Edgar Degas and several other artists founded the Société Anonyme Coopérative des Artistes Peintres, Sculpteurs, Graveurs (\"Company of Painters, Sculptors, and Engravers\") to exhibit their artworks independently. Members of the association were expected to forswear participation in the Salon. The organizers invited a number of other progressive artists to join them in their inaugural exhibition, including the older Eugène Boudin, whose example had first persuaded Monet to adopt plein air painting years before. Another painter who greatly influenced Monet and his friends, Johan Jongkind, declined to participate, as did Édouard Manet. In total, thirty artists participated in their first exhibition, held in April 1874 at the studio of the photographer Nadar.",
"title": "Beginnings"
},
{
"paragraph_id": 11,
"text": "The critical response was mixed. Monet and Cézanne received the harshest attacks. Critic and humorist Louis Leroy wrote a scathing review in the newspaper Le Charivari in which, making wordplay with the title of Claude Monet's Impression, Sunrise (Impression, soleil levant), he gave the artists the name by which they became known. Derisively titling his article \"The Exhibition of the Impressionists\", Leroy declared that Monet's painting was at most, a sketch, and could hardly be termed a finished work.",
"title": "Beginnings"
},
{
"paragraph_id": 12,
"text": "He wrote, in the form of a dialogue between viewers,",
"title": "Beginnings"
},
{
"paragraph_id": 13,
"text": "The term Impressionist quickly gained favour with the public. It was also accepted by the artists themselves, even though they were a diverse group in style and temperament, unified primarily by their spirit of independence and rebellion. They exhibited together—albeit with shifting membership—eight times between 1874 and 1886. The Impressionists' style, with its loose, spontaneous brushstrokes, would soon become synonymous with modern life.",
"title": "Beginnings"
},
{
"paragraph_id": 14,
"text": "Monet, Sisley, Morisot, and Pissarro may be considered the \"purest\" Impressionists, in their consistent pursuit of an art of spontaneity, sunlight, and colour. Degas rejected much of this, as he believed in the primacy of drawing over colour and belittled the practice of painting outdoors. Renoir turned away from Impressionism for a time during the 1880s, and never entirely regained his commitment to its ideas. Édouard Manet, although regarded by the Impressionists as their leader, never abandoned his liberal use of black as a colour (while Impressionists avoided its use and preferred to obtain darker colours by mixing), and never participated in the Impressionist exhibitions. He continued to submit his works to the Salon, where his painting Spanish Singer had won a 2nd class medal in 1861, and he urged the others to do likewise, arguing that \"the Salon is the real field of battle\" where a reputation could be made.",
"title": "Beginnings"
},
{
"paragraph_id": 15,
"text": "Among the artists of the core group (minus Bazille, who had died in the Franco-Prussian War in 1870), defections occurred as Cézanne, followed later by Renoir, Sisley, and Monet, abstained from the group exhibitions so they could submit their works to the Salon. Disagreements arose from issues such as Guillaumin's membership in the group, championed by Pissarro and Cézanne against opposition from Monet and Degas, who thought him unworthy. Degas invited Mary Cassatt to display her work in the 1879 exhibition, but also insisted on the inclusion of Jean-François Raffaëlli, Ludovic Lepic, and other realists who did not represent Impressionist practices, causing Monet in 1880 to accuse the Impressionists of \"opening doors to first-come daubers\". In this regard, the seventh Paris Impressionist exhibition in 1882 was the most selective of all including the works of only nine \"true\" impressionists, namely Gustave Caillebotte, Paul Gauguin, Armand Guillaumin, Claude Monet, Berthe Morisot, Camille Pissarro, Pierre-Auguste Renoir, Alfred Sisley, and Victor Vignon. The group then divided again over the invitations to Paul Signac and Georges Seurat to exhibit with them at the 8th Impressionist exhibition in 1886. Pissarro was the only artist to show at all eight Paris Impressionist exhibitions.",
"title": "Beginnings"
},
{
"paragraph_id": 16,
"text": "The individual artists achieved few financial rewards from the Impressionist exhibitions, but their art gradually won a degree of public acceptance and support. Their dealer, Durand-Ruel, played a major role in this as he kept their work before the public and arranged shows for them in London and New York. Although Sisley died in poverty in 1899, Renoir had a great Salon success in 1879. Monet became secure financially during the early 1880s and so did Pissarro by the early 1890s. By this time the methods of Impressionist painting, in a diluted form, had become commonplace in Salon art.",
"title": "Beginnings"
},
{
"paragraph_id": 17,
"text": "French painters who prepared the way for Impressionism include the Romantic colourist Eugène Delacroix; the leader of the realists, Gustave Courbet; and painters of the Barbizon school such as Théodore Rousseau. The Impressionists learned much from the work of Johan Barthold Jongkind, Jean-Baptiste-Camille Corot and Eugène Boudin, who painted from nature in a direct and spontaneous style that prefigured Impressionism, and who befriended and advised the younger artists.",
"title": "Impressionist techniques"
},
{
"paragraph_id": 18,
"text": "A number of identifiable techniques and working habits contributed to the innovative style of the Impressionists. Although these methods had been used by previous artists—and are often conspicuous in the work of artists such as Frans Hals, Diego Velázquez, Peter Paul Rubens, John Constable, and J. M. W. Turner—the Impressionists were the first to use them all together, and with such consistency. These techniques include:",
"title": "Impressionist techniques"
},
{
"paragraph_id": 19,
"text": "New technology played a role in the development of the style. Impressionists took advantage of the mid-century introduction of premixed paints in tin tubes (resembling modern toothpaste tubes), which allowed artists to work more spontaneously, both outdoors and indoors. Previously, painters made their own paints individually, by grinding and mixing dry pigment powders with linseed oil, which were then stored in animal bladders.",
"title": "Impressionist techniques"
},
{
"paragraph_id": 20,
"text": "Many vivid synthetic pigments became commercially available to artists for the first time during the 19th century. These included cobalt blue, viridian, cadmium yellow, and synthetic ultramarine blue, all of which were in use by the 1840s, before Impressionism. The Impressionists' manner of painting made bold use of these pigments, and of even newer colours such as cerulean blue, which became commercially available to artists in the 1860s.",
"title": "Impressionist techniques"
},
{
"paragraph_id": 21,
"text": "The Impressionists' progress toward a brighter style of painting was gradual. During the 1860s, Monet and Renoir sometimes painted on canvases prepared with the traditional red-brown or grey ground. By the 1870s, Monet, Renoir, and Pissarro usually chose to paint on grounds of a lighter grey or beige colour, which functioned as a middle tone in the finished painting. By the 1880s, some of the Impressionists had come to prefer white or slightly off-white grounds, and no longer allowed the ground colour a significant role in the finished painting.",
"title": "Impressionist techniques"
},
{
"paragraph_id": 22,
"text": "The Impressionists reacted to modernity by exploring \"a wide range of non-academic subjects in art\" such as middle-class leisure activities and \"urban themes, including train stations, cafés, brothels, the theater, and dance.\" They found inspiration in the newly widened avenues of Paris, bounded by new tall buildings that offered opportunities to depict bustling crowds, popular entertainments, and nocturnal lighting in artificially closed-off spaces. A painting such as Caillebotte's Paris Street; Rainy Day (1877) strikes a modern note by emphasizing the isolation of individuals amid the outsized buildings and spaces of the urban environment. When painting landscapes, the Impressionists did not hesitate to include the factories that were proliferating in the countryside. Earlier painters of landscapes had conventionally avoided smokestacks and other signs of industrialization, regarding them as blights on nature's order and unworthy of art.",
"title": "Content and composition"
},
{
"paragraph_id": 23,
"text": "Prior to the Impressionists, other painters, notably such 17th-century Dutch painters as Jan Steen, had emphasized common subjects, but their methods of composition were traditional. They arranged their compositions so that the main subject commanded the viewer's attention. J. M. W. Turner, while an artist of the Romantic era, anticipated the style of impressionism with his artwork. The Impressionists relaxed the boundary between subject and background so that the effect of an Impressionist painting often resembles a snapshot, a part of a larger reality captured as if by chance. Photography was gaining popularity, and as cameras became more portable, photographs became more candid. Photography inspired Impressionists to represent momentary action, not only in the fleeting lights of a landscape, but in the day-to-day lives of people.",
"title": "Content and composition"
},
{
"paragraph_id": 24,
"text": "The development of Impressionism can be considered partly as a reaction by artists to the challenge presented by photography, which seemed to devalue the artist's skill in reproducing reality. Both portrait and landscape paintings were deemed somewhat deficient and lacking in truth as photography \"produced lifelike images much more efficiently and reliably\".",
"title": "Content and composition"
},
{
"paragraph_id": 25,
"text": "In spite of this, photography actually inspired artists to pursue other means of creative expression, and rather than compete with photography to emulate reality, artists focused \"on the one thing they could inevitably do better than the photograph—by further developing into an art form its very subjectivity in the conception of the image, the very subjectivity that photography eliminated\". The Impressionists sought to express their perceptions of nature, rather than create exact representations. This allowed artists to depict subjectively what they saw with their \"tacit imperatives of taste and conscience\". Photography encouraged painters to exploit aspects of the painting medium, like colour, which photography then lacked: \"The Impressionists were the first to consciously offer a subjective alternative to the photograph\".",
"title": "Content and composition"
},
{
"paragraph_id": 26,
"text": "Another major influence was Japanese ukiyo-e art prints (Japonism). The art of these prints contributed significantly to the \"snapshot\" angles and unconventional compositions that became characteristic of Impressionism. An example is Monet's Jardin à Sainte-Adresse, 1867, with its bold blocks of colour and composition on a strong diagonal slant showing the influence of Japanese prints.",
"title": "Content and composition"
},
{
"paragraph_id": 27,
"text": "Edgar Degas was both an avid photographer and a collector of Japanese prints. His The Dance Class (La classe de danse) of 1874 shows both influences in its asymmetrical composition. The dancers are seemingly caught off guard in various awkward poses, leaving an expanse of empty floor space in the lower right quadrant. He also captured his dancers in sculpture, such as the Little Dancer of Fourteen Years.",
"title": "Content and composition"
},
{
"paragraph_id": 28,
"text": "Impressionists, in varying degrees, were looking for ways to depict visual experience and contemporary subjects. Female Impressionists were interested in these same ideals but had many social and career limitations compared to male Impressionists. They were particularly excluded from the imagery of the bourgeois social sphere of the boulevard, cafe, and dance hall. As well as imagery, women were excluded from the formative discussions that resulted in meetings in those places; that was where male Impressionists were able to form and share ideas about Impressionism. In the academic realm, women were believed to be incapable of handling complex subjects which led teachers to restrict what they taught female students. It was also considered unladylike to excel in art since women's true talents were then believed to center on homemaking and mothering.",
"title": "Female Impressionists"
},
{
"paragraph_id": 29,
"text": "Yet several women were able to find success during their lifetime, even though their careers were affected by personal circumstances – Bracquemond, for example, had a husband who was resentful of her work which caused her to give up painting. The four most well known, namely, Mary Cassatt, Eva Gonzalès, Marie Bracquemond, and Berthe Morisot, are, and were, often referred to as the 'Women Impressionists'. Their participation in the series of eight Impressionist exhibitions that took place in Paris from 1874 to 1886 varied: Morisot participated in seven, Cassatt in four, Bracquemond in three, and Gonzalès did not participate.",
"title": "Female Impressionists"
},
{
"paragraph_id": 30,
"text": "The critics of the time lumped these four together without regard to their personal styles, techniques, or subject matter. Critics viewing their works at the exhibitions often attempted to acknowledge the women artists' talents but circumscribed them within a limited notion of femininity. Arguing for the suitability of Impressionist technique to women's manner of perception, Parisian critic S.C. de Soissons wrote:",
"title": "Female Impressionists"
},
{
"paragraph_id": 31,
"text": "One can understand that women have no originality of thought, and that literature and music have no feminine character; but surely women know how to observe, and what they see is quite different from that which men see, and the art which they put in their gestures, in their toilet, in the decoration of their environment is sufficient to give is the idea of an instinctive, of a peculiar genius which resides in each one of them.",
"title": "Female Impressionists"
},
{
"paragraph_id": 32,
"text": "While Impressionism legitimized the domestic social life as subject matter, of which women had intimate knowledge, it also tended to limit them to that subject matter. Portrayals of often-identifiable sitters in domestic settings (which could offer commissions) were dominant in the exhibitions. The subjects of the paintings were often women interacting with their environment by either their gaze or movement. Cassatt, in particular, was aware of her placement of subjects: she kept her predominantly female figures from objectification and cliche; when they are not reading, they converse, sew, drink tea, and when they are inactive, they seem lost in thought.",
"title": "Female Impressionists"
},
{
"paragraph_id": 33,
"text": "The women Impressionists, like their male counterparts, were striving for \"truth,\" for new ways of seeing and new painting techniques; each artist had an individual painting style. Women Impressionists (particularly Morisot and Cassatt) were conscious of the balance of power between women and objects in their paintings – the bourgeois women depicted are not defined by decorative objects, but instead, interact with and dominate the things with which they live. There are many similarities in their depictions of women who seem both at ease and subtly confined. Gonzalès' Box at the Italian Opera depicts a woman staring into the distance, at ease in a social sphere but confined by the box and the man standing next to her. Cassatt's painting Young Girl at a Window is brighter in color but remains constrained by the canvas edge as she looks out the window.",
"title": "Female Impressionists"
},
{
"paragraph_id": 34,
"text": "Despite their success in their ability to have a career and Impressionism's demise attributed to its allegedly feminine characteristics (its sensuality, dependence on sensation, physicality, and fluidity) the four women artists (and other, lesser-known women Impressionists) were largely omitted from art historical textbooks covering Impressionist artists until Tamar Garb's Women Impressionists published in 1986. For example, Impressionism by Jean Leymarie, published in 1955 included no information on any women Impressionists.",
"title": "Female Impressionists"
},
{
"paragraph_id": 35,
"text": "Painter Androniqi Zengo Antoniu is co-credited with the introduction of impressionism to Albania.",
"title": "Female Impressionists"
},
{
"paragraph_id": 36,
"text": "The central figures in the development of Impressionism in France, listed alphabetically, were:",
"title": "Prominent Impressionists"
},
{
"paragraph_id": 37,
"text": "The Impressionists",
"title": "Timeline: lives of the Impressionists"
},
{
"paragraph_id": 38,
"text": "Among the close associates of the Impressionists, Victor Vignon is the only artist outside the group of prominent names who participated to the most exclusive Seventh Paris Impressionist Exhibition in 1882, which was indeed a rejection to the previous less restricted exhibitions chiefly organized by Degas. Originally from the school of Corot, Vignon was a friend of Camille Pissarro, whose influence is evident in his impressionist style after the late 1870s, and a friend of post-impressionist Vincent van Gogh.",
"title": "Associates and influenced artists"
},
{
"paragraph_id": 39,
"text": "There were several other close associates of the Impressionists who adopted their methods to some degree. These include Jean-Louis Forain (who participated in Impressionist exhibitions in 1879, 1880, 1881 and 1886) and Giuseppe De Nittis, an Italian artist living in Paris who participated in the first Impressionist exhibit at the invitation of Degas, although the other Impressionists disparaged his work. Federico Zandomeneghi was another Italian friend of Degas who showed with the Impressionists. Eva Gonzalès was a follower of Manet who did not exhibit with the group. James Abbott McNeill Whistler was an American-born painter who played a part in Impressionism although he did not join the group and preferred grayed colours. Walter Sickert, an English artist, was initially a follower of Whistler, and later an important disciple of Degas; he did not exhibit with the Impressionists. In 1904 the artist and writer Wynford Dewhurst wrote the first important study of the French painters published in English, Impressionist Painting: its genesis and development, which did much to popularize Impressionism in Great Britain.",
"title": "Associates and influenced artists"
},
{
"paragraph_id": 40,
"text": "By the early 1880s, Impressionist methods were affecting, at least superficially, the art of the Salon. Fashionable painters such as Jean Béraud and Henri Gervex found critical and financial success by brightening their palettes while retaining the smooth finish expected of Salon art. Works by these artists are sometimes casually referred to as Impressionism, despite their remoteness from Impressionist practice.",
"title": "Associates and influenced artists"
},
{
"paragraph_id": 41,
"text": "The influence of the French Impressionists lasted long after most of them had died. Artists like J.D. Kirszenbaum were borrowing Impressionist techniques throughout the twentieth century.",
"title": "Associates and influenced artists"
},
{
"paragraph_id": 42,
"text": "As the influence of Impressionism spread beyond France, artists, too numerous to list, became identified as practitioners of the new style. Some of the more important examples are:",
"title": "Beyond France"
},
{
"paragraph_id": 43,
"text": "The sculptor Auguste Rodin is sometimes called an Impressionist for the way he used roughly modeled surfaces to suggest transient light effects.",
"title": "Sculpture, photography and film"
},
{
"paragraph_id": 44,
"text": "Pictorialist photographers whose work is characterized by soft focus and atmospheric effects have also been called Impressionists.",
"title": "Sculpture, photography and film"
},
{
"paragraph_id": 45,
"text": "French Impressionist Cinema is a term applied to a loosely defined group of films and filmmakers in France from 1919 to 1929, although these years are debatable. French Impressionist filmmakers include Abel Gance, Jean Epstein, Germaine Dulac, Marcel L’Herbier, Louis Delluc, and Dmitry Kirsanoff.",
"title": "Sculpture, photography and film"
},
{
"paragraph_id": 46,
"text": "Musical Impressionism is the name given to a movement in European classical music that arose in the late 19th century and continued into the middle of the 20th century. Originating in France, musical Impressionism is characterized by suggestion and atmosphere, and eschews the emotional excesses of the Romantic era. Impressionist composers favoured short forms such as the nocturne, arabesque, and prelude, and often explored uncommon scales such as the whole tone scale. Perhaps the most notable innovations of Impressionist composers were the introduction of major 7th chords and the extension of chord structures in 3rds to five- and six-part harmonies.",
"title": "Music and literature"
},
{
"paragraph_id": 47,
"text": "The influence of visual Impressionism on its musical counterpart is debatable. Claude Debussy and Maurice Ravel are generally considered the greatest Impressionist composers, but Debussy disavowed the term, calling it the invention of critics. Erik Satie was also considered in this category, though his approach was regarded as less serious, more musical novelty in nature. Paul Dukas is another French composer sometimes considered an Impressionist, but his style is perhaps more closely aligned to the late Romanticists. Musical Impressionism beyond France includes the work of such composers as Ottorino Respighi (Italy), Ralph Vaughan Williams, Cyril Scott, and John Ireland (England), Manuel De Falla and Isaac Albeniz (Spain), and Charles Griffes (America).",
"title": "Music and literature"
},
{
"paragraph_id": 48,
"text": "The term Impressionism has also been used to describe works of literature in which a few select details suffice to convey the sensory impressions of an incident or scene. Impressionist literature is closely related to Symbolism, with its major exemplars being Baudelaire, Mallarmé, Rimbaud, and Verlaine. Authors such as Virginia Woolf, D.H. Lawrence, Henry James, and Joseph Conrad have written works that are Impressionistic in the way that they describe, rather than interpret, the impressions, sensations and emotions that constitute a character's mental life.",
"title": "Music and literature"
},
{
"paragraph_id": 49,
"text": "During the 1880s several artists began to develop different precepts for the use of colour, pattern, form, and line, derived from the Impressionist example: Vincent van Gogh, Paul Gauguin, Georges Seurat, and Henri de Toulouse-Lautrec. These artists were slightly younger than the Impressionists, and their work is known as post-Impressionism. Some of the original Impressionist artists also ventured into this new territory; Camille Pissarro briefly painted in a pointillist manner, and even Monet abandoned strict plein air painting. Paul Cézanne, who participated in the first and third Impressionist exhibitions, developed a highly individual vision emphasising pictorial structure, and he is more often called a post-Impressionist. Although these cases illustrate the difficulty of assigning labels, the work of the original Impressionist painters may, by definition, be categorised as Impressionism.",
"title": "Post-Impressionism"
}
]
| Impressionism was a 19th-century art movement characterized by relatively small, thin, yet visible brush strokes, open composition, emphasis on accurate depiction of light in its changing qualities, ordinary subject matter, unusual visual angles, and inclusion of movement as a crucial element of human perception and experience. Impressionism originated with a group of Paris-based artists whose independent exhibitions brought them to prominence during the 1870s and 1880s. The Impressionists faced harsh opposition from the conventional art community in France. The name of the style derives from the title of a Claude Monet work, Impression, soleil levant, which provoked the critic Louis Leroy to coin the term in a satirical 1874 review published in the Parisian newspaper Le Charivari. The development of Impressionism in the visual arts was soon followed by analogous styles in other media that became known as impressionist music and impressionist literature. | 2001-10-19T12:37:26Z | 2023-12-30T20:39:11Z | [
"Template:Short description",
"Template:Reflist",
"Template:OCLC",
"Template:Wiktionary",
"Template:Gutenberg",
"Template:Impressionists",
"Template:Use dmy dates",
"Template:Clear",
"Template:Circa",
"Template:ISBN",
"Template:Cite web",
"Template:Webarchive",
"Template:Wikiquote",
"Template:About",
"Template:Lang",
"Template:Center",
"Template:Commons category",
"Template:Post-Impressionism",
"Template:Authority control",
"Template:Fact",
"Template:Main",
"Template:Cite book",
"Template:Cite journal",
"Template:Refbegin",
"Template:Refend",
"Template:Navboxes"
]
| https://en.wikipedia.org/wiki/Impressionism |
15,172 | Internet slang | Internet slang (also called Internet shorthand, cyber-slang, netspeak, digispeak or chatspeak) is a non-standard or unofficial form of language used by people on the Internet to communicate to one another. An example of Internet slang is "LOL" meaning "laugh out loud." Since Internet slang is constantly changing, it is difficult to provide a standardized definition. However, it can be understood to be any type of slang that Internet users have popularized, and in many cases, have coined. Such terms often originate with the purpose of saving keystrokes or to compensate for small character limits. Many people use the same abbreviations in texting, instant messaging, and social networking websites. Acronyms, keyboard symbols, and abbreviations are common types of Internet slang. New dialects of slang, such as leet or Lolspeak, develop as ingroup Internet memes rather than time savers. Many people also use Internet slang in face-to-face, real life communication.
Internet slang originated in the early days of the Internet, with some terms predating the Internet. The earliest forms of Internet slang assumed people's knowledge of programming and commands in a specific language. Internet slang is used in chat rooms, social networking services, online games, video games and in the online community. Since 1979, users of communications networks like Usenet have created their own shorthand.
The primary motivation for using a slang unique to the Internet is to ease communication. However, while Internet slang shortcuts save time for the writer, they take two times as long for the reader to understand, according to a study by the University of Tasmania. On the other hand, similar to the use of slang in traditional face-to-face speech or written language, slang on the Internet is often a way of indicating group membership.
Internet slang provides a channel which facilitates and constrains the ability to communicate in ways that are fundamentally different from those found in other semiotic situations. Many of the expectations and practices which we associate with spoken and written language are no longer applicable. The Internet itself is ideal for new slang to emerge because of the richness of the medium and the availability of information. Slang is also thus motivated for the "creation and sustenance of online communities". These communities, in turn, play a role in solidarity or identification or an exclusive or common cause.
David Crystal distinguishes among five areas of the Internet where slang is used: the Web itself, email, asynchronous chat (for example, mailing lists), synchronous chat (for example, Internet Relay Chat), and virtual worlds. The electronic character of the channel has a fundamental influence on the language of the medium. Options for communication are constrained by the nature of the hardware needed in order to gain Internet access. Thus, productive linguistic capacity (the type of information that can be sent) is determined by the preassigned characters on a keyboard, and receptive linguistic capacity (the type of information that can be seen) is determined by the size and configuration of the screen. Additionally, both sender and receiver are constrained linguistically by the properties of the internet software, computer hardware, and networking hardware linking them. Electronic discourse refers to writing that "very often reads as if it were being spoken – that is, as if the sender were writing talking".
Internet slang does not constitute a homogeneous language variety; rather, it differs according to the user and type of Internet situation. Audience design occurs in online platforms, and therefore online communities can develop their own sociolects, or shared linguistic norms.
Within the language of Internet slang, there is still an element of prescriptivism, as seen in style guides, for example Wired Style, which are specifically aimed at usage on the Internet. Even so, few users consciously heed these prescriptive recommendations on CMC, but rather adapt their styles based on what they encounter online. Although it is difficult to produce a clear definition of Internet slang, the following types of slang may be observed. This list is not exhaustive.
Debates continue about how the use of slang on the Internet influences language outside of the digital sphere. Even though the direct causal relationship between the Internet and language has yet to be proven by any scientific research, Internet slang has invited split views on its influence on the standard of language use in non-computer-mediated communications.
Prescriptivists tend to have the widespread belief that the Internet has a negative influence on the future of language, and that it could lead to a degradation of standard. Some would even attribute any decline of standard formal English to the increase in usage of electronic communication. It has also been suggested that the linguistic differences between Standard English and CMC can have implications for literacy education. This is illustrated by the widely reported example of a school essay submitted by a Scottish teenager, which contained many abbreviations and acronyms likened to SMS language. There was great condemnation of this style by the mass media as well as educationists, who expressed that this showed diminishing literacy or linguistic abilities.
On the other hand, descriptivists have counter-argued that the Internet allows better expressions of a language. Rather than established linguistic conventions, linguistic choices sometimes reflect personal taste. It has also been suggested that as opposed to intentionally flouting language conventions, Internet slang is a result of a lack of motivation to monitor speech online. Hale and Scanlon describe language in emails as being derived from "writing the way people talk", and that there is no need to insist on 'Standard' English. English users, in particular, have an extensive tradition of etiquette guides, instead of traditional prescriptive treatises, that offer pointers on linguistic appropriateness. Using and spreading Internet slang also adds onto the cultural currency of a language. It is important to the speakers of the language due to the foundation it provides for identifying within a group, and also for defining a person's individual linguistic and communicative competence. The result is a specialized subculture based on its use of slang.
In scholarly research, attention has, for example, been drawn to the effect of the use of Internet slang in ethnography, and more importantly to how conversational relationships online change structurally because slang is used.
In German, there is already considerable controversy regarding the use of anglicisms outside of CMC. This situation is even more problematic within CMC, since the jargon of the medium is dominated by English terms. An extreme example of an anti-anglicisms perspective can be observed from the chatroom rules of a Christian site, which bans all anglicisms ("Das Verwenden von Anglizismen ist strengstens untersagt!" [Using anglicisms is strictly prohibited!]), and also translates even fundamental terms into German equivalents.
In April 2014, Gawker's editor-in-chief Max Read instituted new writing style guidelines banning internet slang for his writing staff.
Internet slang has crossed from being mediated by the computer into other non-physical domains. Here, these domains are taken to refer to any domain of interaction where interlocutors need not be geographically proximate to one another, and where the Internet is not primarily used. Internet slang is now prevalent in telephony, mainly through short messages (SMS) communication. Abbreviations and interjections, especially, have been popularized in this medium, perhaps due to the limited character space for writing messages on mobile phones. Another possible reason for this spread is the convenience of transferring the existing mappings between expression and meaning into a similar space of interaction.
At the same time, Internet slang has also taken a place as part of everyday offline language, among those with digital access. The nature and content of online conversation is brought forward to direct offline communication through the telephone and direct talking, as well as through written language, such as in writing notes or letters. Interjections, such as numerically based and abbreviated Internet slang, are not pronounced as they are physically written, nor replaced by any actual action. Rather, they become lexicalized and spoken like non-slang words in a "stage direction" like fashion, where the actual action is not carried out but substituted with a verbal signal. The notions of flaming and trolling have also extended outside the computer, and are used in the same circumstances of deliberate or unintentional implicatures.
The expansion of Internet slang has been furthered through codification and the promotion of digital literacy. The subsequently existing and growing popularity of such references among those online as well as offline has thus advanced Internet slang literacy and globalized it. Awareness and proficiency in manipulating Internet slang in both online and offline communication indicates digital literacy and teaching materials have even been developed to further this knowledge. A South Korean publisher, for example, has published a textbook that details the meaning and context of use for common Internet slang instances and is targeted at young children who will soon be using the Internet. Similarly, Internet slang has been recommended as language teaching material in second language classrooms in order to raise communicative competence by imparting some of the cultural value attached to a language that is available only in slang.
Meanwhile, well-known dictionaries such as the ODE and Merriam-Webster have been updated with a significant and growing body of slang jargon. Besides common examples, lesser known slang and slang with a non-English etymology have also found a place in standardized linguistic references. Along with these instances, literature in user-contributed dictionaries such as Urban Dictionary has also been added to. Codification seems to be qualified through frequency of use, and novel creations are often not accepted by other users of slang.
Although Internet slang began as a means of "opposition" to mainstream language, its popularity with today's globalized digitally literate population has shifted it into a part of everyday language, where it also leaves a profound impact.
Frequently used slang has also become conventionalised into memetic "unit[s] of cultural information". These memes in turn are further spread through their use on the Internet, prominently through websites. The Internet as an "information superhighway" is also catalysed through slang. The evolution of slang has also created a 'slang union' as part of a unique, specialised subculture. Such impacts are, however, limited and require further discussion, especially from the non-English world. This is because Internet slang is prevalent in languages more actively used on the Internet, like English, which is the Internet's lingua franca.
In Japanese, the term moe has come into common use among slang users to mean something "preciously cute" and appealing.
Aside from the more frequent abbreviations, acronyms, and emoticons, Internet slang also uses archaic words or the lesser-known meanings of mainstream terms. Regular words can also be altered into something with a similar pronunciation but altogether different meaning, or given new meanings altogether. Phonetic transcriptions are the transformation of words into how they sound in a certain language, and are used as internet slang. In places where logographic languages are used, such as China, a visual Internet slang exists, giving characters dual meanings, one direct and one implied.
The Internet has helped people from all over the world to become connected to one another, enabling "global" relationships to be formed. As such, it is important for the various types of slang used online to be recognizable for everyone. It is also important to do so because of how other languages are quickly catching up with English on the Internet, following the increase in Internet usage in predominantly non-English speaking countries. In fact, as of January 2020, only approximately 25.9% of the online population is made up of English speakers.
Different cultures tend to have different motivations behind their choice of slang, on top of the difference in language used. For example, in China, because of the tough Internet regulations imposed, users tend to use certain slang to talk about issues deemed as sensitive to the government. These include using symbols to separate the characters of a word to avoid detection from manual or automated text pattern scanning and consequential censorship. An outstanding example is the use of the term river crab to denote censorship. River crab (hexie) is pronounced the same as "harmony"—the official term used to justify political discipline and censorship. As such Chinese netizens reappropriate the official terms in a sarcastic way.
Abbreviations are popular across different cultures, including countries like Japan, China, France, Portugal, etc., and are used according to the particular language the Internet users speak. Significantly, this same style of slang creation is also found in non-alphabetical languages as, for example, a form of "e gao" or alternative political discourse.
The difference in language often results in miscommunication, as seen in an onomatopoeic example, "555", which sounds like "crying" in Chinese, and "laughing" in Thai. A similar example is between the English "haha" and the Spanish "jaja", where both are onomatopoeic expressions of laughter, but the difference in language also meant a different consonant for the same sound to be produced. For more examples of how other languages express "laughing out loud", see also: LOL
In terms of culture, in Chinese, the numerically based onomatopoeia "770880" (simplified Chinese: 亲亲你抱抱你; traditional Chinese: 親親你抱抱你; pinyin: qīn qīn nǐ bào bào nǐ), which means to 'kiss and hug you', is used. This is comparable to "XOXO", which many Internet users use. In French, "pk" or "pq" is used in the place of pourquoi, which means 'why'. This is an example of a combination of onomatopoeia and shortening of the original word for convenience when writing online.
In conclusion, every different country has their own language background and cultural differences and hence, they tend to have their own rules and motivations for their own Internet slang. However, at present, there is still a lack of studies done by researchers on some differences between the countries.
On the whole, the popular use of Internet slang has resulted in a unique online and offline community as well as a couple sub-categories of "special internet slang which is different from other slang spread on the whole internet... similar to jargon... usually decided by the sharing community". It has also led to virtual communities marked by the specific slang they use and led to a more homogenized yet diverse online culture.
Internet slang is also used in advertising. Two empirical studies found that Internet slang could help promote a product or capture the audience's attention in an advertisement, but it did not increase sales of the product. However, using Internet slang in an advertisement may attract a certain demographic, and might not be the best choice depending on the product or goods. Furthermore, an overuse of Internet slang also negatively affects the brand due to the perceived quality of the advertisement, but using an appropriate amount is sufficient to draw more attention to the ad. According to the experiment, Internet slang helped capture the attention of consumers of necessity items. However, the demographic for luxury goods differs, and using Internet slang could cause the brand to lose credibility because of concerns over the appropriateness of Internet slang.
{
"paragraph_id": 0,
"text": "Internet slang (also called Internet shorthand, cyber-slang, netspeak, digispeak or chatspeak) is a non-standard or unofficial form of language used by people on the Internet to communicate to one another. An example of Internet slang is \"LOL\" meaning \"laugh out loud.\" Since Internet slang is constantly changing, it is difficult to provide a standardized definition. However, it can be understood to be any type of slang that Internet users have popularized, and in many cases, have coined. Such terms often originate with the purpose of saving keystrokes or to compensate for small character limits. Many people use the same abbreviations in texting, instant messaging, and social networking websites. Acronyms, keyboard symbols, and abbreviations are common types of Internet slang. New dialects of slang, such as leet or Lolspeak, develop as ingroup Internet memes rather than time savers. Many people also use Internet slang in face-to-face, real life communication.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Internet slang originated in the early days of the Internet with some terms predating the Internet. The earliest forms of Internet slang assumed people's knowledge of programming and commands in a specific language. Internet slang is used in chat rooms, social networking services, online games, video games and in the online community. Since 1979, users of communications networks like Usenet created their own shorthand.",
"title": "Creation and evolution"
},
{
"paragraph_id": 2,
"text": "The primary motivation for using a slang unique to the Internet is to ease communication. However, while Internet slang shortcuts save time for the writer, they take two times as long for the reader to understand, according to a study by the University of Tasmania. On the other hand, similar to the use of slang in traditional face-to-face speech or written language, slang on the Internet is often a way of indicating group membership.",
"title": "Creation and evolution"
},
{
"paragraph_id": 3,
"text": "Internet slang provides a channel which facilitates and constrains the ability to communicate in ways that are fundamentally different from those found in other semiotic situations. Many of the expectations and practices which we associate with spoken and written language are no longer applicable. The Internet itself is ideal for new slang to emerge because of the richness of the medium and the availability of information. Slang is also thus motivated for the \"creation and sustenance of online communities\". These communities, in turn, play a role in solidarity or identification or an exclusive or common cause.",
"title": "Creation and evolution"
},
{
"paragraph_id": 4,
"text": "David Crystal distinguishes among five areas of the Internet where slang is used- The Web itself, email, asynchronous chat (for example, mailing lists), synchronous chat (for example, Internet Relay Chat), and virtual worlds. The electronic character of the channel has a fundamental influence on the language of the medium. Options for communication are constrained by the nature of the hardware needed in order to gain Internet access. Thus, productive linguistic capacity (the type of information that can be sent) is determined by the preassigned characters on a keyboard, and receptive linguistic capacity (the type of information that can be seen) is determined by the size and configuration of the screen. Additionally, both sender and receiver are constrained linguistically by the properties of the internet software, computer hardware, and networking hardware linking them. Electronic discourse refers to writing that is \"very often reads as if it were being spoken – that is, as if the sender were writing talking\".",
"title": "Creation and evolution"
},
{
"paragraph_id": 5,
"text": "Internet slang does not constitute a homogeneous language variety; rather, it differs according to the user and type of Internet situation. Audience design occurs in online platforms, and therefore online communities can develop their own sociolects, or shared linguistic norms.",
"title": "Types of slang"
},
{
"paragraph_id": 6,
"text": "Within the language of Internet slang, there is still an element of prescriptivism, as seen in style guides, for example Wired Style, which are specifically aimed at usage on the Internet. Even so, few users consciously heed these prescriptive recommendations on CMC, but rather adapt their styles based on what they encounter online. Although it is difficult to produce a clear definition of Internet slang, the following types of slang may be observed. This list is not exhaustive.",
"title": "Types of slang"
},
{
"paragraph_id": 7,
"text": "Many debates about how the use of slang on the Internet influences language outside of the digital sphere go on. Even though the direct causal relationship between the Internet and language has yet to be proven by any scientific research, Internet slang has invited split views on its influence on the standard of language use in non-computer-mediated communications.",
"title": "Views"
},
{
"paragraph_id": 8,
"text": "Prescriptivists tend to have the widespread belief that the Internet has a negative influence on the future of language, and that it could lead to a degradation of standard. Some would even attribute any decline of standard formal English to the increase in usage of electronic communication. It has also been suggested that the linguistic differences between Standard English and CMC can have implications for literacy education. This is illustrated by the widely reported example of a school essay submitted by a Scottish teenager, which contained many abbreviations and acronyms likened to SMS language. There was great condemnation of this style by the mass media as well as educationists, who expressed that this showed diminishing literacy or linguistic abilities.",
"title": "Views"
},
{
"paragraph_id": 9,
"text": "On the other hand, descriptivists have counter-argued that the Internet allows better expressions of a language. Rather than established linguistic conventions, linguistic choices sometimes reflect personal taste. It has also been suggested that as opposed to intentionally flouting language conventions, Internet slang is a result of a lack of motivation to monitor speech online. Hale and Scanlon describe language in emails as being derived from \"writing the way people talk\", and that there is no need to insist on 'Standard' English. English users, in particular, have an extensive tradition of etiquette guides, instead of traditional prescriptive treatises, that offer pointers on linguistic appropriateness. Using and spreading Internet slang also adds onto the cultural currency of a language. It is important to the speakers of the language due to the foundation it provides for identifying within a group, and also for defining a person's individual linguistic and communicative competence. The result is a specialized subculture based on its use of slang.",
"title": "Views"
},
{
"paragraph_id": 10,
"text": "In scholarly research, attention has, for example, been drawn to the effect of the use of Internet slang in ethnography, and more importantly to how conversational relationships online change structurally because slang is used.",
"title": "Views"
},
{
"paragraph_id": 11,
"text": "In German, there is already considerable controversy regarding the use of anglicisms outside of CMC. This situation is even more problematic within CMC, since the jargon of the medium is dominated by English terms. An extreme example of an anti-anglicisms perspective can be observed from the chatroom rules of a Christian site, which bans all anglicisms (\"Das Verwenden von Anglizismen ist strengstens untersagt!\" [Using anglicisms is strictly prohibited!]), and also translates even fundamental terms into German equivalents.",
"title": "Views"
},
{
"paragraph_id": 12,
"text": "In April 2014, Gawker's editor-in-chief Max Read instituted new writing style guidelines banning internet slang for his writing staff.",
"title": "Views"
},
{
"paragraph_id": 13,
"text": "Internet slang has crossed from being mediated by the computer into other non-physical domains. Here, these domains are taken to refer to any domain of interaction where interlocutors need not be geographically proximate to one another, and where the Internet is not primarily used. Internet slang is now prevalent in telephony, mainly through short messages (SMS) communication. Abbreviations and interjections, especially, have been popularized in this medium, perhaps due to the limited character space for writing messages on mobile phones. Another possible reason for this spread is the convenience of transferring the existing mappings between expression and meaning into a similar space of interaction.",
"title": "Use beyond computer-mediated communication"
},
{
"paragraph_id": 14,
"text": "At the same time, Internet slang has also taken a place as part of everyday offline language, among those with digital access. The nature and content of online conversation is brought forward to direct offline communication through the telephone and direct talking, as well as through written language, such as in writing notes or letters. In the case of interjections, such as numerically based and abbreviated Internet slang, are not pronounced as they are written physically or replaced by any actual action. Rather, they become lexicalized and spoken like non-slang words in a \"stage direction\" like fashion, where the actual action is not carried out but substituted with a verbal signal. The notions of flaming and trolling have also extended outside the computer, and are used in the same circumstances of deliberate or unintentional implicatures.",
"title": "Use beyond computer-mediated communication"
},
{
"paragraph_id": 15,
"text": "The expansion of Internet slang has been furthered through codification and the promotion of digital literacy. The subsequently existing and growing popularity of such references among those online as well as offline has thus advanced Internet slang literacy and globalized it. Awareness and proficiency in manipulating Internet slang in both online and offline communication indicates digital literacy and teaching materials have even been developed to further this knowledge. A South Korean publisher, for example, has published a textbook that details the meaning and context of use for common Internet slang instances and is targeted at young children who will soon be using the Internet. Similarly, Internet slang has been recommended as language teaching material in second language classrooms in order to raise communicative competence by imparting some of the cultural value attached to a language that is available only in slang.",
"title": "Use beyond computer-mediated communication"
},
{
"paragraph_id": 16,
"text": "Meanwhile, well-known dictionaries such as the ODE and Merriam-Webster have been updated with a significant and growing body of slang jargon. Besides common examples, lesser known slang and slang with a non-English etymology have also found a place in standardized linguistic references. Along with these instances, literature in user-contributed dictionaries such as Urban Dictionary has also been added to. Codification seems to be qualified through frequency of use, and novel creations are often not accepted by other users of slang.",
"title": "Use beyond computer-mediated communication"
},
{
"paragraph_id": 17,
"text": "Although Internet slang began as a means of \"opposition\" to mainstream language, its popularity with today's globalized digitally literate population has shifted it into a part of everyday language, where it also leaves a profound impact.",
"title": "Use beyond computer-mediated communication"
},
{
"paragraph_id": 18,
"text": "Frequently used slang also have become conventionalised into memetic \"unit[s] of cultural information\". These memes in turn are further spread through their use on the Internet, prominently through websites. The Internet as an \"information superhighway\" is also catalysed through slang. The evolution of slang has also created a 'slang union' as part of a unique, specialised subculture. Such impacts are, however, limited and requires further discussion especially from the non-English world. This is because Internet slang is prevalent in languages more actively used on the Internet, like English, which is the Internet's lingua franca.",
"title": "Use beyond computer-mediated communication"
},
{
"paragraph_id": 19,
"text": "In Japanese, the term moe has come into common use among slang users to mean something \"preciously cute\" and appealing.",
"title": "Around the world"
},
{
"paragraph_id": 20,
"text": "Aside from the more frequent abbreviations, acronyms, and emoticons, Internet slang also uses archaic words or the lesser-known meanings of mainstream terms. Regular words can also be altered into something with a similar pronunciation but altogether different meaning, or attributed new meanings altogether. Phonetic transcriptions are the transformation of words to how it sounds in a certain language, and are used as internet slang. In places where logographic languages are used, such as China, a visual Internet slang exists, giving characters dual meanings, one direct and one implied.",
"title": "Around the world"
},
{
"paragraph_id": 21,
"text": "The Internet has helped people from all over the world to become connected to one another, enabling \"global\" relationships to be formed. As such, it is important for the various types of slang used online to be recognizable for everyone. It is also important to do so because of how other languages are quickly catching up with English on the Internet, following the increase in Internet usage in predominantly non-English speaking countries. In fact, as of January 2020, only approximately 25.9% of the online population is made up of English speakers.",
"title": "Around the world"
},
{
"paragraph_id": 22,
"text": "Different cultures tend to have different motivations behind their choice of slang, on top of the difference in language used. For example, in China, because of the tough Internet regulations imposed, users tend to use certain slang to talk about issues deemed as sensitive to the government. These include using symbols to separate the characters of a word to avoid detection from manual or automated text pattern scanning and consequential censorship. An outstanding example is the use of the term river crab to denote censorship. River crab (hexie) is pronounced the same as \"harmony\"—the official term used to justify political discipline and censorship. As such Chinese netizens reappropriate the official terms in a sarcastic way.",
"title": "Around the world"
},
{
"paragraph_id": 23,
"text": "Abbreviations are popular across different cultures, including countries like Japan, China, France, Portugal, etc., and are used according to the particular language the Internet users speak. Significantly, this same style of slang creation is also found in non-alphabetical languages as, for example, a form of \"e gao\" or alternative political discourse.",
"title": "Around the world"
},
{
"paragraph_id": 24,
"text": "The difference in language often results in miscommunication, as seen in an onomatopoeic example, \"555\", which sounds like \"crying\" in Chinese, and \"laughing\" in Thai. A similar example is between the English \"haha\" and the Spanish \"jaja\", where both are onomatopoeic expressions of laughter, but the difference in language also meant a different consonant for the same sound to be produced. For more examples of how other languages express \"laughing out loud\", see also: LOL",
"title": "Around the world"
},
{
"paragraph_id": 25,
"text": "In terms of culture, in Chinese, the numerically based onomatopoeia \"770880\" (simplified Chinese: 亲亲你抱抱你; traditional Chinese: 親親你抱抱你; pinyin: qīn qīn nǐ bào bào nǐ), which means to 'kiss and hug you', is used. This is comparable to \"XOXO\", which many Internet users use. In French, \"pk\" or \"pq\" is used in the place of pourquoi, which means 'why'. This is an example of a combination of onomatopoeia and shortening of the original word for convenience when writing online.",
"title": "Around the world"
},
{
"paragraph_id": 26,
"text": "In conclusion, every different country has their own language background and cultural differences and hence, they tend to have their own rules and motivations for their own Internet slang. However, at present, there is still a lack of studies done by researchers on some differences between the countries.",
"title": "Around the world"
},
{
"paragraph_id": 27,
"text": "On the whole, the popular use of Internet slang has resulted in a unique online and offline community as well as a couple sub-categories of \"special internet slang which is different from other slang spread on the whole internet... similar to jargon... usually decided by the sharing community\". It has also led to virtual communities marked by the specific slang they use and led to a more homogenized yet diverse online culture.",
"title": "Around the world"
},
{
"paragraph_id": 28,
"text": "Internet slang is considered a form of advertisement. Through two empirical studies, it was proven that Internet slang could help promote or capture the crowd's attention through advertisement, but did not increase the sales of the product. However, using Internet slang in advertisement may attract a certain demographic, and might not be the best to use depending on the product or goods. Furthermore, an overuse of Internet slang also negatively effects the brand due to quality of the advertisement, but using an appropriate amount would be sufficient in providing more attention to the ad. According to the experiment, Internet slang helped capture the attention of the consumers of necessity items. However, the demographic of luxury goods differ, and using Internet slang would potentially have the brand lose credibility due to the appropriateness of Internet slang.",
"title": "Internet slang in advertisements"
}
]
| Internet slang is a non-standard or unofficial form of language used by people on the Internet to communicate to one another. An example of Internet slang is "LOL" meaning "laugh out loud." Since Internet slang is constantly changing, it is difficult to provide a standardized definition. However, it can be understood to be any type of slang that Internet users have popularized, and in many cases, have coined. Such terms often originate with the purpose of saving keystrokes or to compensate for small character limits. Many people use the same abbreviations in texting, instant messaging, and social networking websites. Acronyms, keyboard symbols, and abbreviations are common types of Internet slang. New dialects of slang, such as leet or Lolspeak, develop as ingroup Internet memes rather than time savers. Many people also use Internet slang in face-to-face, real life communication. | 2001-10-29T19:43:57Z | 2023-12-19T17:22:05Z | [
"Template:Cite web",
"Template:Use dmy dates",
"Template:Short description",
"Template:IPA",
"Template:Div col",
"Template:Lang",
"Template:Cite magazine",
"Template:Cbignore",
"Template:Unbulleted list",
"Template:Internet slang",
"Template:About",
"Template:Reflist",
"Template:Cite journal",
"Template:Commons category",
"Template:Internet",
"Template:Portal",
"Template:Cite book",
"Template:Zh",
"Template:Annotated link",
"Template:Cite CiteSeerX",
"Template:Wiktionary",
"Template:Authority control",
"Template:'",
"Template:ISSN",
"Template:Cite news",
"Template:Div col end",
"Template:ISBN",
"Template:Internet dialects"
]
| https://en.wikipedia.org/wiki/Internet_slang |
15,174 | Impi | Impi is a Nguni word meaning war or combat and by association any body of men gathered for war, for example impi ya masosha is a term denoting an army. Impi were formed from regiments (amabutho) from amakhanda (large militarised homesteads). In English impi is often used to refer to a Zulu regiment, which is called an ibutho in Zulu or the army.
Its beginnings lie far back in historic local warfare customs, when groups of armed men called impi battled. They were systematised radically by the Zulu king Shaka, who was then only the exiled illegitimate son of king Senzangakhona kaJama, but already showing much prowess as a general in the army (impi) of Mthethwa king Dingiswayo in the Ndwandwe–Zulu War of 1817–1819.
The Zulu impi is popularly identified with the ascent of Shaka, ruler of the relatively small Zulu tribe before its explosion across the landscape of southern Africa, but its earliest shape as an instrument of statecraft lies in the innovations of the Mthethwa chieftain Dingiswayo, according to some historians (Morris 1965). These innovations in turn drew upon existing tribal customs, such as the iNtanga. This was an age grade tradition common among many of the Bantu peoples of the continent's southern region. Young men were organised into age groups, with each cohort responsible for certain duties and tribal ceremonies. Periodically, the older age grades were summoned to the kraals of sub-chieftains, or inDunas, for consultations, assignments, and an induction ceremony that marked their transition from boys to full-fledged adults and warriors, the ukuButwa. Kraal or settlement elders generally handled local disputes and issues. Above them were the inDunas, and above the inDunas stood the chief of a particular clan lineage or tribe. The inDunas handled administrative matters for their chiefs – ranging from settlement of disputes, to the collection of taxes. In time of war, the inDunas supervised the fighting men in their areas, forming leadership of the military forces deployed for combat. The age grade iNtangas, under the guidance of the inDunas, formed the basis for the systematic regimental organisation that would become known worldwide as the impi.
Warfare was of low intensity among the KwaZulu Natal tribes prior to the rise of Shaka, though it occurred frequently. Objectives were typically limited to such matters as cattle raiding, avenging some personal insult, or resolving disputes over segments of grazing land. Generally a loose mob, called an impi participated in these melees. There were no campaigns of extermination against the defeated. They simply moved on to other open spaces on the veldt, and equilibrium was restored.
The bow and arrow were known but seldom used. Warfare, like the hunt, depended on skilled spearmen and trackers. The primary weapon was a thin six-foot (1.8 m) throwing spear, the assegai; several were carried into combat. Defensive weapons included a small cowhide shield, which was later improved by King Shaka. Many battles were prearranged, with the clan warriors meeting at an agreed place and time while women and children of the clan watched from some distance away. Ritualized taunts, single combats and tentative charges were the typical pattern. If the affair did not dissipate beforehand, one side might find enough courage to mount a sustained attack and drive off their enemies. Casualties were usually light. The defeated clan might pay in lands or cattle and have captives to be ransomed, but extermination and mass casualties were rare. Tactics were rudimentary.
Outside the ritual battles, the quick raid was the most frequent combat action, marked by burning kraals, seizure of captives, and the driving off of cattle. Pastoral herders and light agriculturalists, the Bantu did not usually build permanent fortifications to fend off enemies. A clan under threat simply packed their meagre material possessions, rounded up their cattle and fled until the marauders were gone. If the marauders did not stay to permanently dispossess them of grazing areas, the fleeing clan might return to rebuild in a day or two. The genesis of the Zulu impi thus lies in tribal structures existing long before the coming of Europeans or the Shaka era.
In the early 19th century, a combination of factors began to change the customary pattern. These included rising populations, the growth of white settlement and slaving that dispossessed native peoples both at the Cape and in Portuguese Mozambique, and the rise of ambitious "new men." One such man, a warrior called Dingiswayo (the Troubled One) of the Mthethwa rose to prominence. Historians such as Donald Morris hold that his political genius laid the basis for a relatively light hegemony. This was established through a combination of diplomacy and conquest, using not extermination or slavery, but strategic reconciliation and judicious force of arms. This hegemony reduced the frequent feuding and fighting among the small clans in the Mthethwa's orbit, transferring their energies to more centralised forces. Under Dingiswayo the age grades came to be regarded as military drafts, deployed more frequently to maintain the new order. It was from these small clans, including among them the eLangeni and the Zulu, that Shaka sprung.
Shaka proved himself to be one of Dingiswayo's most able warriors after the military call-up of his age grade to serve in the Mthethwa forces. He fought with his iziCwe regiment wherever he was assigned during this early period, but from the beginning, Shaka's approach to battle did not fit the traditional mould. He began to implement his own individual methods and style, designing the famous short stabbing spear, the iKlwa, and a larger, stronger shield, and discarding the oxhide sandals that he felt slowed him down. These methods proved effective on a small scale, but Shaka himself was restrained by his overlord. His conception of warfare was far more extreme than the reconciliatory methods of Dingiswayo. He sought to bring combat to a swift and bloody decision, as opposed to duels of individual champions, scattered raids, or limited skirmishes where casualties were comparatively light. While his mentor and overlord Dingiswayo lived, Shakan methods were reined in, but the removal of this check gave the Zulu chieftain much broader scope. It was under his rule that a much more rigorous mode of tribal warfare came into being. This newer, brutal focus demanded changes in weapons, organisation and tactics.
Shaka is credited with introducing a new variant of the traditional weapon, demoting the long, spindly throwing spear in favour of a heavy-bladed, short-shafted stabbing spear. He is also said to have introduced a larger, heavier cowhide shield (isihlangu), and trained his forces to thus close with the enemy in more effective hand-to-hand combat. The throwing spear was not discarded, but standardised like the stabbing implement and carried as a missile weapon, typically discharged at the foe, before close contact. These weapons changes integrated with and facilitated an aggressive mobility and tactical organisation.
As weapons, the Zulu warrior carried the iklwa stabbing spear (losing one could result in execution) and a club or cudgel fashioned from dense hardwood known in Zulu as the iwisa, usually called the knobkerrie or knobkerry in English and knopkierie in Afrikaans, for beating an enemy in the manner of a mace. Zulu officers often carried the half-moon-shaped Zulu axe, but this weapon was more of a symbol to show their rank. The iklwa – so named because of the sucking sound it made when withdrawn from a human body – with its long, broad blade of 25 centimetres (9.8 in), was an invention of Shaka that superseded the older thrown ipapa (so named because of the "pa-pa" sound it made as it flew through the air). The iklwa could theoretically be used both in melee and as a thrown weapon, but warriors were forbidden in Shaka's day from throwing it, which would disarm them and give their opponents something to throw back. Moreover, Shaka felt that throwing the spear discouraged warriors from closing into hand-to-hand combat.
Shaka's brother, and successor, Dingane kaSenzangakhona reintroduced greater use of the throwing spear, perhaps as a counter to Boer firearms.
As early as Shaka's reign small numbers of firearms, often obsolete muskets and rifles, were obtained by the Zulus from Europeans by trade. In the aftermath of the defeat of the British at the Battle of Isandlwana in 1879, many Martini–Henry rifles were captured by the Zulus together with considerable amounts of ammunition. The advantage of this capture is debatable due to the alleged tendency of Zulu warriors to close their eyes when firing such weapons. The possession of firearms did little to change Zulu tactics, which continued to rely on a swift approach to the enemy to bring him into close combat.
All warriors carried a shield made of oxhide, which retained the hair, with a central stiffening shaft of wood, the mgobo. Shields were the property of the king; they were stored in specialised structures raised off the ground for protection from vermin when not issued to the relevant regiment. The large isihlangu shield of Shaka's day was about five feet in length and was later partially replaced by the smaller umbumbuluzo, a shield of identical manufacture but around three and a half feet in length. Close combat relied on co-ordinated use of the iklwa and shield. The warrior sought to get the edge of his shield behind the edge of his enemy's, so that he could pull the enemy's shield to the side, thus opening him to a thrust with the iklwa deep into the abdomen or chest.
The fast-moving host, like all military formations, needed supplies. These were provided by young boys, who were attached to a force and carried rations, cooking pots, sleeping mats, extra weapons and other material. Cattle were sometimes driven on the hoof as a movable larder. Again, such arrangements in the local context were probably nothing unusual. What was different was the systematisation and organisation, a pattern yielding major benefits when the Zulu were dispatched on raiding missions.
Age-grade groupings of various sorts were common in the Bantu tribal culture of the day, and indeed are still important in much of Africa. Age grades were responsible for a variety of activities, from guarding the camp, to cattle herding, to certain rituals and ceremonies. It was customary in Zulu culture for young men to provide limited service to their local chiefs until they were married and recognised as official householders. Shaka manipulated this system, transferring the customary service period from the regional clan leaders to himself, strengthening his personal hegemony. Such groupings on the basis of age did not constitute a permanent, paid military in the modern Western sense; nevertheless, they did provide a stable basis for sustained armed mobilisation, much more so than ad hoc tribal levies or war parties.
Shaka organised the various age grades into regiments, and quartered them in special military kraals, with each regiment having its own distinctive names and insignia. Some historians argue that the large military establishment was a drain on the Zulu economy and necessitated continual raiding and expansion. This may be true since large numbers of the society's men were isolated from normal occupations, but whatever the resource impact, the regimental system clearly built on existing tribal cultural elements that could be adapted and shaped to fit an expansionist agenda.
After their 20th birthdays, young men would be sorted into formal ibutho (plural amabutho) or regiments. They would build their ikhanda (often referred to as a 'homestead', as it was basically a stockaded group of huts surrounding a corral for cattle), their gathering place when summoned for active service. Active service continued until a man married, a privilege only the king bestowed. The amabutho were recruited on the basis of age rather than regional or tribal origin. The reason for this was to enhance the centralised power of the Zulu king at the expense of clan and tribal leaders. They swore loyalty to the king of the Zulu nation.
Shaka discarded sandals to enable his warriors to run faster. Initially the move was unpopular, but those who objected were simply killed, a practice that quickly concentrated the minds of remaining personnel. Zulu tradition indicates that Shaka hardened the feet of his troops by having them stamp thorny tree and bush branches flat. Shaka drilled his troops frequently, implementing forced marches covering more than fifty miles a day. He also drilled the troops to carry out encirclement tactics (see below). Such mobility gave the Zulu a significant impact in their local region and beyond. Upkeep of the regimental system and training seems to have continued after Shaka's death, although Zulu defeats by the Boers, and growing encroachment by British colonists, sharply curtailed raiding operations prior to the War of 1879. Morris (1965, 1982) records one such mission under King Mpande to give green warriors of the uThulwana regiment experience: a raid into Swaziland, dubbed "Fund' uThulwana" by the Zulu, or "Teach the uThulwana".
Impi warriors were trained as early as age six, joining the army as udibi porters at first, being enrolled into same-age groups (intanga). Until they were buta'd, Zulu boys accompanied their fathers and brothers on campaign as servants. Eventually, they would go to the nearest ikhanda to kleza (literally, "to drink directly from the udder"), at which time the boys would become inkwebane, cadets. They would spend their time training until they were formally enlisted by the king. They would challenge each other to stick fights, which had to be accepted on pain of dishonor.
In Shaka's day, warriors often wore elaborate plumes and cow tail regalia in battle, but by the Anglo-Zulu War of 1879, many warriors wore only a loin cloth and a minimal form of headdress. The later-period Zulu soldier went into battle relatively simply dressed, painting his upper body and face with chalk and red ochre, despite the popular conception of elaborately panoplied warriors. Each ibutho had a singular arrangement of headdress and other adornments, so that the Zulu army could be said to have had regimental uniforms; latterly the 'full-dress' was only worn on festive occasions. The men of senior regiments would wear, in addition to their other headdress, the head-ring (isicoco) denoting their married state. A gradation of shield colour was found, junior regiments having largely dark shields and the more senior ones having lighter-coloured shields; Shaka's personal regiment, the Fasimba (The Haze), had white shields with only a small patch of darker colour. This shield uniformity was facilitated by the custom of separating the king's cattle into herds based on their coat colours.
Certain adornments were awarded to individual warriors for conspicuous courage in action; these included a type of heavy brass arm-ring (ingxotha) and an intricate necklace composed of interlocking wooden pegs (iziqu).
The Zulu typically took the offensive, deploying in the well known "buffalo horns" formation. The attack layout was composed of four elements, each of which represented a grouping of Zulu regiments:
Encirclement tactics were not unique in the region, and attempts to surround an enemy were not unknown even in the ritualised battles. The use of separate manoeuvre elements to support a stronger central group was also known in pre-mechanised tribal warfare, as was the use of reserve echelons farther back. What was unique about the Zulu was the degree of organisation, the consistency with which they used these tactics, and the speed at which they executed them. Developments and refinements may have taken place after Shaka's death, as witnessed by the use of larger groupings of regiments by the Zulu against the British in 1879. Missions, available manpower and enemies varied, but whether facing native spear or European bullet, the impis generally fought in and adhered to the classical buffalo horns pattern.
Organization. The Zulu forces were generally grouped into three levels: regiments, corps of several regiments, and "armies" or bigger formations, although the Zulu did not use these terms in the modern sense. Size distinctions were taken account of: any grouping of men on a mission could collectively be called an impi, whether a raiding party of 100 or a horde of 10,000. Numbers were not uniform, but dependent on a variety of factors including assignments by the king, or the manpower mustered by various clan chiefs or localities. A regiment might be 400 or 4,000 men. These were grouped into corps that took their name from the military kraals where they were mustered, or sometimes the dominant regiment of that locality. While the modest Zulu population could not turn out the hundreds of thousands available to major world or continental powers like France, Britain, or Russia, the Zulu "nation in arms" approach could mobilize substantial forces in local context for short campaigns, and maneuver them in the Western equivalent of divisional strength. The victory won by Zulu king Cetshwayo at Ndondakusuka, for example, two decades before the Anglo-Zulu War of 1879, involved a battlefield deployment of 30,000 troops.
Higher command and unit leadership. An inDuna guided each regiment, and he in turn answered to senior izinduna who controlled the corps grouping. Overall guidance of the host was furnished by elder izinduna, usually with many years of experience. One or more of these elder chiefs might accompany a big force on an important mission. Coordination of tactical movements was supplied by the indunas, who used hand signals and messengers. Generally, before deploying for battle, the regiments were made to squat in a semicircle while these commanders made final assignments and adjustments. Lower-level regimental izinduna, like the NCOs of today's armies and yesterday's Roman centurions, were extremely important to morale and discipline. Prior to the clash at Isandhlwana, for example, they imposed order on the frenzied rush of warriors eager to get at the British, and steadied those faltering under withering enemy fire during the battle. The widely spaced maneuvers of an impi could sometimes make control problematic once an attack was unleashed. Indeed, the Zulu attacks on the British strongpoints at Rorke's Drift and at Kambula (both bloody defeats) seem to have been carried out by over-enthusiastic leaders and warriors despite contrary orders of the Zulu king, Cetshwayo. Such over-confidence or disobedience by thrusting leaders or forces is not unusual in warfare. At the Battle of Trebia, for example, the over-confident Roman commander Sempronius was provoked into a hasty attack that resulted in a defeat for Roman arms. Likewise, General George Custer disobeyed the orders of his superior, General Terry, and rashly launched a disastrous charge against Indian forces at the Battle of the Little Bighorn, resulting in the total destruction of his command. Popular film re-enactments display a grizzled inDuna directing the Zulu host from a promontory with elegant sweeps of the hand, and the reserves did indeed remain within the top commanders' overall control. Coordination after an army was set in motion, however, relied more on the initial pre-positioning and assignments of the regiments before the advance, and on the deep understanding by Zulu officers of the general attack plan. These sub-commanders could thus slow down or speed up their approach runs to maintain the general "buffalo horns" alignment to match terrain and situation.
As noted above, Shaka was neither the originator of the impi, nor of the age grade structure, nor of the concept of a bigger grouping than the small clan system. His major innovations were to blend these traditional elements in a new way, to systematise the approach to battle, and to standardise organisation, methods and weapons, particularly in his adoption of the iklwa – the Zulu thrusting spear – unique long-term regimental units, and the "buffalo horns" formation. Dingiswayo's approach was of a loose federation of allies under his hegemony, combining to fight, each with their own contingents, under their own leaders. Shaka dispensed with this, insisting instead on a standardised organisation and weapons package that swept away and replaced old clan allegiances with loyalty to himself. This uniform approach also encouraged the loyalty and identification of warriors with their own distinctive military regiments. In time, these warriors, from many conquered tribes and clans, came to regard themselves as one nation – the Zulu. The so-called Marian reforms of Rome in the military sphere are referenced by some writers as similar. While other ancient powers such as the Carthaginians maintained a patchwork of force types, and the legions retained such phalanx-style holdovers as the triarii, later writers would attribute to Marius the implementation of one consistent, standardised approach for all the infantry that likely actually took place gradually across many years. This enabled more disciplined formations and efficient execution of tactics over time against a variety of enemies. As one military historian notes:
To understand the full scope of the impi's performance in battle, military historians of the Zulu typically look to its early operations against internal African enemies, not merely the British interlude. In terms of numbers, the operations of the impi would change—from the Western equivalent of small company- and battalion-size forces, to manoeuvres in multi-divisional strength of between 10,000 and 40,000 men. The victory won by Zulu king Cetshwayo at Ndondakusuka, for example, two decades before the Anglo-Zulu War, involved a deployment of 30,000 troops. These were sizeable formations in regional context but represented the bulk of prime Zulu fighting strength. Few impi-style formations were to routinely achieve this level of mobilisation for a single battle. By comparison, at Cannae, the Romans deployed 80,000 men, and generally could put tens of thousands more into smaller combat actions. The popular notion of countless attacking black spearmen is a distorted one. Manpower supplies on the continent were often limited. In the words of one historian: "The savage hordes of popular lore seldom materialized on African battlefields." This limited resource base would hurt the Zulu when they confronted technologically advanced world powers such as Britain. The advent of new weapons like firearms would also have a profound impact on the African battlefield, but as will be seen, the impi-style forces largely eschewed firearms, or used them in a minor way. Whether facing native spear or European bullet, impis largely fought as they had since the days of Shaka, from Zululand to Zimbabwe, and from Mozambique to Tanzania.
The Zulu had greater numbers than their opponents, but greater numbers massed together in compact arrays simply presented easy targets in the age of modern firearms and artillery. African tribes that fought in smaller guerrilla detachments typically held out against European invaders for a much longer time, as witnessed by the 7-year resistance of the Lobi against the French in West Africa, or the operations of the Berbers in Algeria against the French.
When the Zulu did acquire firearms, most notably captured stocks after the great victory at Isandhlwana, they lacked training and used them ineffectively, consistently firing high to give the bullets "strength." Southern Africa, including the areas near Natal, was teeming with bands like the Griquas who had learned to use guns. Indeed, one such group not only mastered the way of the gun, but became proficient horsemen as well, skills that helped build the Basotho tribe, in what is now the nation of Lesotho. In addition, numerous European renegades or adventurers (both Boer and non-Boer) skilled in firearms were known to the Zulu. Some had even led detachments for the Zulu kings on military missions.
Throughout the 19th century they persisted in "human wave" attacks against well-defended European positions where massed firepower devastated their ranks. The ministrations of an isAngoma (plural: izAngoma), a Zulu diviner or "witch doctor", and the bravery of individual regiments were ultimately of little use against the volleys of modern rifles, Gatling guns and artillery at the Inyezane River, Rorke's Drift, Kambula, Gingindlovu and finally Ulundi.
The term "impi" has become synonymous with the Zulu nation in international popular culture. It appears in various video games such as Civilization III, Civilization IV: Warlords, Civilization: Revolution, Civilization V: Brave New World, and Civilization VI, where the Impi is the unique unit for the Zulu faction with Shaka as their leader. 'Impi' is also the title of a well-known South African song by Johnny Clegg and the band Juluka, which has become something of an unofficial national anthem, especially at major international sports events and especially when the opponent is England.
Lyrics:
Before stage seven of the 2013 Tour de France, the Orica–GreenEDGE cycling team played 'Impi' on their team bus in honor of teammate Daryl Impey, the first South African Tour de France leader. | [
{
"paragraph_id": 0,
"text": "Impi is a Nguni word meaning war or combat and by association any body of men gathered for war, for example impi ya masosha is a term denoting an army. Impi were formed from regiments (amabutho) from amakhanda (large militarised homesteads). In English impi is often used to refer to a Zulu regiment, which is called an ibutho in Zulu or the army.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Its beginnings lie far back in historic local warfare customs, when groups of armed men called impi battled. They were systematised radically by the Zulu king Shaka, who was then only the exiled illegitimate son of king Senzangakhona kaJama, but already showing much prowess as a general in the army (impi) of Mthethwa king Dingiswayo in the Ndwandwe–Zulu War of 1817–1819.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The Zulu impi is popularly identified with the ascent of Shaka, ruler of the relatively small Zulu tribe before its explosion across the landscape of southern Africa, but its earliest shape as an instrument of statecraft lies in the innovations of the Mthethwa chieftain Dingiswayo, according to some historians (Morris 1965). These innovations in turn drew upon existing tribal customs, such as the iNtanga. This was an age grade tradition common among many of the Bantu peoples of the continent's southern region. Young men were organised into age groups, with each cohort responsible for certain duties and tribal ceremonies. Periodically, the older age grades were summoned to the kraals of sub-chieftains, or inDunas, for consultations, assignments, and an induction ceremony that marked their transition from boys to full-fledged adults and warriors, the ukuButwa. Kraal or settlement elders generally handled local disputes and issues. Above them were the inDunas, and above the inDunas stood the chief of a particular clan lineage or tribe. The inDunas handled administrative matters for their chiefs – ranging from settlement of disputes, to the collection of taxes. In time of war, the inDunas supervised the fighting men in their areas, forming leadership of the military forces deployed for combat. The age grade iNtangas, under the guidance of the inDunas, formed the basis for the systematic regimental organisation that would become known worldwide as the impi.",
"title": "Genesis"
},
{
"paragraph_id": 3,
"text": "Warfare was of low intensity among the KwaZulu Natal tribes prior to the rise of Shaka, though it occurred frequently. Objectives were typically limited to such matters as cattle raiding, avenging some personal insult, or resolving disputes over segments of grazing land. Generally a loose mob, called an impi participated in these melees. There were no campaigns of extermination against the defeated. They simply moved on to other open spaces on the veldt, and equilibrium was restored.",
"title": "Genesis"
},
{
"paragraph_id": 4,
"text": "The bow and arrow were known but seldom used. Warfare, like the hunt, depended on skilled spearmen and trackers. The primary weapon was a thin six-foot (1.8 m) throwing spear, the assegai; several were carried into combat. Defensive weapons included a small cowhide shield, which was later improved by King Shaka. Many battles were prearranged, with the clan warriors meeting at an agreed place and time while women and children of the clan watched from some distance away. Ritualized taunts, single combats and tentative charges were the typical pattern. If the affair did not dissipate before, one side might find enough courage to mount a sustained attack and drive their enemies. Casualties were usually light. The defeated clan might pay in lands or cattle and have captives to be ransomed but extermination and mass casualties were rare. Tactics were rudimentary.",
"title": "Genesis"
},
{
"paragraph_id": 5,
"text": "Outside the ritual battles, the quick raid was the most frequent combat action, marked by burning kraals, seizure of captives, and the driving off of cattle. Pastoral herders and light agriculturalists, the Bantu did not usually build permanent fortifications to fend off enemies. A clan under threat simply packed their meagre material possessions, rounded up their cattle and fled until the marauders were gone. If the marauders did not stay to permanently dispossess them of grazing areas, the fleeing clan might return to rebuild in a day or two. The genesis of the Zulu impi thus lies in tribal structures existing long before the coming of Europeans or the Shaka era.",
"title": "Genesis"
},
{
"paragraph_id": 6,
"text": "In the early 19th century, a combination of factors began to change the customary pattern. These included rising populations, the growth of white settlement and slaving that dispossessed native peoples both at the Cape and in Portuguese Mozambique, and the rise of ambitious \"new men.\" One such man, a warrior called Dingiswayo (the Troubled One) of the Mthethwa rose to prominence. Historians such as Donald Morris hold that his political genius laid the basis for a relatively light hegemony. This was established through a combination of diplomacy and conquest, using not extermination or slavery, but strategic reconciliation and judicious force of arms. This hegemony reduced the frequent feuding and fighting among the small clans in the Mthethwa's orbit, transferring their energies to more centralised forces. Under Dingiswayo the age grades came to be regarded as military drafts, deployed more frequently to maintain the new order. It was from these small clans, including among them the eLangeni and the Zulu, that Shaka sprung.",
"title": "Genesis"
},
{
"paragraph_id": 7,
"text": "Shaka proved himself to be one of Dingiswayo's most able warriors after the military call up of his age grade to serve in the Mthethwa forces. He fought with his iziCwe regiment wherever he was assigned during this early period, but from the beginning, Shaka's approach to battle did not fit the traditional mould. He began to implement his own individual methods and style, designing the famous short stabbing spear the iKlwa, a larger, stronger shield, and discarding the oxhide sandals that he felt slowed him down. These methods proved effective on a small scale, but Shaka himself was restrained by his overlord. His conception of warfare was far more extreme than the reconcilitory methods of Dingiswayo. He sought to bring combat to a swift and bloody decision, as opposed to duels of individual champions, scattered raids, or limited skirmishes where casualties were comparatively light. While his mentor and overlord Dingiswayo lived, Shakan methods were reined in, but the removal of this check gave the Zulu chieftain much broader scope. It was under his rule that a much more rigorous mode of tribal warfare came into being. This newer, brutal focus demanded changes in weapons, organisation and tactics.",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 8,
"text": "Shaka is credited with introducing a new variant of the traditional weapon, demoting the long, spindly throwing spear in favour of a heavy-bladed, short-shafted stabbing spear. He is also said to have introduced a larger, heavier cowhide shield (isihlangu), and trained his forces to thus close with the enemy in more effective hand-to-hand combat. The throwing spear was not discarded, but standardised like the stabbing implement and carried as a missile weapon, typically discharged at the foe, before close contact. These weapons changes integrated with and facilitated an aggressive mobility and tactical organisation.",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 9,
"text": "As weapons, the Zulu warrior carried the iklwa stabbing spear (losing one could result in execution) and a club or cudgel fashioned from dense hardwood known in Zulu as the iwisa, usually called the knobkerrie or knobkerry in English and knopkierie in Afrikaans, for beating an enemy in the manner of a mace. Zulu officers often carried the half-moon-shaped Zulu ax, but this weapon was more of a symbol to show their rank. The iklwa – so named because of the sucking sound it made when withdrawn from a human body – with its long 25 centimetres (9.8 in) and broad blade was an invention of Shaka that superseded the older thrown ipapa (so named because of the \"pa-pa\" sound it made as it flew through the air). The iklwa could theoretically be used both in melee and as a thrown weapon, but warriors were forbidden in Shaka's day from throwing it, which would disarm them and give their opponents something to throw back. Moreover, Shaka felt it discouraged warriors from closing into hand-to-hand combat.",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 10,
"text": "Shaka's brother, and successor, Dingane kaSenzangakhona reintroduced greater use of the throwing spear, perhaps as a counter to Boer firearms.",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 11,
"text": "As early as Shaka's reign small numbers of firearms, often obsolete muskets and rifles, were obtained by the Zulus from Europeans by trade. In the aftermath of the defeat of the British at the Battle of Isandlwana in 1879, many Martini–Henry rifles were captured by the Zulus together with considerable amounts of ammunition. The advantage of this capture is debatable due to the alleged tendency of Zulu warriors to close their eyes when firing such weapons. The possession of firearms did little to change Zulu tactics, which continued to rely on a swift approach to the enemy to bring him into close combat.",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 12,
"text": "All warriors carried a shield made of oxhide, which retained the hair, with a central stiffening shaft of wood, the mgobo. Shields were the property of the king; they were stored in specialised structures raised off the ground for protection from vermin when not issued to the relevant regiment. The large isihlangu shield of Shaka's day was about five feet in length and was later partially replaced by the smaller umbumbuluzo, a shield of identical manufacture but around three and a half feet in length. Close combat relied on co-ordinated use of the iklwa and shield. The warrior sought to get the edge of his shield behind the edge of his enemy's, so that he could pull the enemy's shield to the side, thus opening him to a thrust with the iklwa deep into the abdomen or chest.",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 13,
"text": "The fast-moving host, like all military formations, needed supplies. These were provided by young boys, who were attached to a force and carried rations, cooking pots, sleeping mats, extra weapons and other material. Cattle were sometimes driven on the hoof as a movable larder. Again, such arrangements in the local context were probably nothing unusual. What was different was the systematisation and organisation, a pattern yielding major benefits when the Zulu were dispatched on raiding missions.",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 14,
"text": "Age-grade groupings of various sorts were common in the Bantu tribal culture of the day, and indeed are still important in much of Africa. Age grades were responsible for a variety of activities, from guarding the camp, to cattle herding, to certain rituals and ceremonies. It was customary in Zulu culture for young men to provide limited service to their local chiefs until they were married and recognised as official householders. Shaka manipulated this system, transferring the customary service period from the regional clan leaders to himself, strengthening his personal hegemony. Such groupings on the basis of age, did not constitute a permanent, paid military in the modern Western sense, nevertheless they did provide a stable basis for sustained armed mobilisation, much more so than ad hoc tribal levies or war parties.",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 15,
"text": "Shaka organised the various age grades into regiments, and quartered them in special military kraals, with each regiment having its own distinctive names and insignia. Some historians argue that the large military establishment was a drain on the Zulu economy and necessitated continual raiding and expansion. This may be true since large numbers of the society's men were isolated from normal occupations, but whatever the resource impact, the regimental system clearly built on existing tribal cultural elements that could be adapted and shaped to fit an expansionist agenda.",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 16,
"text": "After their 20th birthdays, young men would be sorted into formal ibutho (plural amabutho) or regiments. They would build their i=handa (often referred to as a 'homestead', as it was basically a stockaded group of huts surrounding a corral for cattle), their gathering place when summoned for active service. Active service continued until a man married, a privilege only the king bestowed. The amabutho were recruited on the basis of age rather than regional or tribal origin. The reason for this was to enhance the centralised power of the Zulu king at the expense of clan and tribal leaders. They swore loyalty to the king of the Zulu nation.",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 17,
"text": "Shaka discarded sandals to enable his warriors to run faster. Initially the move was unpopular, but those who objected were simply killed, a practice that quickly concentrated the minds of remaining personnel. Zulu tradition indicates that Shaka hardened the feet of his troops by having them stamp thorny tree and bush branches flat. Shaka drilled his troops frequently, implementing forced marches covering more than fifty miles a day. He also drilled the troops to carry out encirclement tactics (see below). Such mobility gave the Zulu a significant impact in their local region and beyond. Upkeep of the regimental system and training seems to have continued after Shaka's death, although Zulu defeats by the Boers, and growing encroachment by British colonists, sharply curtailed raiding operations prior to the War of 1879. Morris (1965, 1982) records one such mission under King Mpande to give green warriors of the uThulwana regiment experience: a raid into Swaziland, dubbed \"Fund' uThulwana\" by the Zulu, or \"Teach the uThulwana\".",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 18,
"text": "Impi warriors were trained as early as age six, joining the army as udibi porters at first, being enrolled into same-age groups (intanga). Until they were buta'd, Zulu boys accompanied their fathers and brothers on campaign as servants. Eventually, they would go to the nearest ikhanda to kleza (literally, \"to drink directly from the udder\"), at which time the boys would become inkwebane, cadets. They would spend their time training until they were formally enlisted by the king. They would challenge each other to stick fights, which had to be accepted on pain of dishonor.",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 19,
"text": "In Shaka's day, warriors often wore elaborate plumes and cow tail regalia in battle, but by the Anglo-Zulu War of 1879, many warriors wore only a loin cloth and a minimal form of headdress. The later period Zulu soldier went into battle relatively simply dressed, painting his upper body and face with chalk and red ochre, despite the popular conception of elaborately panoplied warriors. Each ibutho had a singular arrangement of headdress and other adornments, so that the Zulu army could be said to have had regimental uniforms; latterly the 'full-dress' was only worn on festive occasions. The men of senior regiments would wear, in addition to their other headdress, the head-ring (isicoco) denoting their married state. A gradation of shield colour was found, junior regiments having largely dark shields the more senior ones having shields with more light colouring; Shaka's personal regiment Fasimba (The Haze) having white shields with only a small patch of darker colour. This shield uniformity was facilitated by the custom of separating the king's cattle into herds based on their coat colours.",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 20,
"text": "Certain adornments were awarded to individual warriors for conspicuous courage in action; these included a type of heavy brass arm-ring (ingxotha) and an intricate necklace composed of interlocking wooden pegs (iziqu).",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 21,
"text": "The Zulu typically took the offensive, deploying in the well known \"buffalo horns\" formation. The attack layout was composed of four elements, each of which represented a grouping of Zulu regiments:",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 22,
"text": "Encirclement tactics were not unique in the region and attempts to surround an enemy were not unknown even in the ritualised battles. The use of separate manoeuvre elements to support a stronger central group was also known in pre-mechanised tribal warfare, as is the use of reserve echelons farther back. What was unique about the Zulu was the degree of organisation, consistency with which they used these tactics, and the speed at which they executed them. Developments and refinements may have taken place after Shaka's death, as witnessed by the use of larger groupings of regiments by the Zulu against the British in 1879. Missions, available manpower and enemies varied, but whether facing native spear, or European bullet, the impis generally fought in and adhered to the classical buffalo horns pattern.",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 23,
"text": "Organization. The Zulu forces were generally grouped into 3 levels: regiments, corps of several regiments, and \"armies\" or bigger formations, although the Zulu did not use these terms in the modern sense. Size distinctions were taken account of, any grouping of men on a mission could collectively be called an impi, whether a raiding party of 100 or horde of 10,000. Numbers were not uniform, but dependent on a variety of factors including assignments by the king, or the manpower mustered by various clan chiefs or localities. A regiment might be 400 or 4000 men. These were grouped into Corps that took their name from the military kraals where they were mustered, or sometimes the dominant regiment of that locality. While the modest Zulu population could not turn out the hundreds of thousand available to major world or continental powers like France, Britain, or Russia, the Zulu \"nation in arms\" approach could mobilize substantial forces in local context for short campaigns, and maneuver them in the Western equivalent of divisional strength. The victory won by Zulu king Cetshwayo at Ndondakusuka, for example, two decades before the Anglo-Zulu War of 1879, involved a battlefield deployment of 30,000 troops.",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 24,
"text": "Higher command and unit leadership. An inDuna guided each regiment, and he in turn answered to senior izinduna who controlled the corps grouping. Overall guidance of the host was furnished by elder izinduna usually with many years of experience. One or more of these elder chiefs might accompany a big force on an important mission. Coordination of tactical movements was supplied by the indunas who used hand signals and messengers. Generally before deploying for battle, the regiments were made to squat in a semicircle while these commanders made final assignments and adjustments. Lower level regimental izinduna, like the NCOs of today's armies, and yesterday's Roman centurions, were extremely important to morale and discipline. Prior to the clash at Isandhlwana for example, they imposed order on the frenzied rush of warriors eager to get at the British, and steadied those faltering under withering enemy fire during the battle. The widely spaced maneuvers of an impi sometimes could make control problematic once an attack was unleashed. Indeed, the Zulu attacks on the British strongpoints at Rorke's Drift and at Kambula, (both bloody defeats) seemed to have been carried out by over-enthusiastic leaders and warriors despite contrary orders of the Zulu King, Cetshwayo. Such over-confidence or disobedience by thrusting leaders or forces is not unusual in warfare. At the Battle of Trebia for example, the over-confident Roman commander Sempronius was provoked into a hasty attack, that resulted in a defeat for Roman arms. Likewise, General George Custer disobeyed the orders of his superior, General Terry, and rashly launched a disastrous charge against Indian forces at the Battle of the Little Bighorn, resulting in the total destruction of his command. Popular film re-enactments display a grizzled izinduna directing the Zulu host from a promontory with elegant sweeps of the hand, and the reserves still lay within top commanders' overall control. Coordination after an army was set in motion however relied more on the initial pre-positioning and assignments of the regiments before the advance, and the deep understanding by Zulu officers of the general attack plan. These sub-commanders could thus slow down or speed up their approach runs to maintain the general \"buffalo horns\" alignment to match terrain and situation.",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 25,
"text": "As noted above, Shaka was neither the originator of the impi, or the age grade structure, nor the concept of a bigger grouping than the small clan system. His major innovations were to blend these traditional elements in a new way, to systematise the approach to battle, and to standardise organization, methods and weapons, particularly in his adoption of the ilkwa – the Zulu thrusting spear, unique long-term regimental units, and the \"buffalo horns\" formation. Dingswayo's approach was of a loose federation of allies under his hegemony, combining to fight, each with their own contingents, under their own leaders. Shaka dispensed with this, insisting instead on a standardised organisation and weapons package that swept away and replaced old clan allegiances with loyalty to himself. This uniform approach also encouraged the loyalty and identification of warriors with their own distinctive military regiments. In time, these warriors, from many conquered tribes and clans came to regard themselves as one nation- the Zulu. The so-called Marian reforms of Rome in the military sphere are referenced by some writers as similar. While other ancient powers such as the Carthaginians maintained a patchwork of force types, and the legions retained such phalanx-style holdovers like the triarii, later writers would attribute to Marius the implementation of one consistent standardised approach for all the infantry that likely actually took place gradually across many years. This enabled more disciplined formations and efficient execution of tactics over time against a variety of enemies. As one military historian notes:",
"title": "Ascent and innovations of Shaka"
},
{
"paragraph_id": 26,
"text": "To understand the full scope of the impi's performance in battle, military historians of the Zulu typically look to its early operations against internal African enemies, not merely the British interlude. In terms of numbers, the operations of the impi would change—from the Western equivalent of small company and battalion size forces, to manoeuvres in multi-divisional strength of between 10,000 and 40,000 men. The victory won by Zulu king Cetawasyo at Ndondakusuka, for example, two decades before the Anglo-Zulu War, involved a deployment of 30,000 troops. These were sizeable formations in regional context but represented the bulk of prime Zulu fighting strength. Few impi-style formations were to routinely achieve this level of mobilisation for a single battle. By comparison, at Cannae, the Romans deployed 80,000 men, and generally could put tens of thousands more into smaller combat actions. The popular notion of countless attacking black spearmen is a distorted one. Manpower supplies on the continent were often limited. In the words of one historian: \"The savage hordes of popular lore seldom materialized on African battlefields.\" This limited resource base would hurt the Zulu when they confronted technologically advanced world powers such as Britain. The advent of new weapons like firearms would also have a profound impact on the African battlefield, but as will be seen, the impi-style forces largely eschewed firearms, or used them in a minor way. Whether facing native spear or European bullet, impis largely fought as they had since the days of Shaka, from Zululand to Zimbabwe, and from Mozambique to Tanzania.",
"title": "In battle"
},
{
"paragraph_id": 27,
"text": "The Zulu had greater numbers than their opponents, but greater numbers massed together in compact arrays simply presented easy targets in the age of modern firearms and artillery. African tribes that fought in smaller guerrilla detachments typically held out against European invaders for a much longer time, as witnessed by the 7-year resistance of the Lobi against the French in West Africa, or the operations of the Berbers in Algeria against the French.",
"title": "In battle"
},
{
"paragraph_id": 28,
"text": "When the Zulu did acquire firearms, most notably captured stocks after the great victory at Isandhlwana, they lacked training and used them ineffectively, consistently firing high to give the bullets \"strength.\" Southern Africa, including the areas near Natal, was teeming with bands like the Griquas who had learned to use guns. Indeed, one such group not only mastered the way of the gun, but became proficient horsemen as well, skills that helped build the Basotho tribe, in what is now the nation of Lesotho. In addition, numerous European renegades or adventurers (both Boer and non-Boer) skilled in firearms were known to the Zulu. Some had even led detachments for the Zulu kings on military missions.",
"title": "In battle"
},
{
"paragraph_id": 29,
"text": "Throughout the 19th century they persisted in \"human wave\" attacks against well defended European positions where massed firepower devastated their ranks. The ministrations of an isAngoma (plural: izAngoma) Zulu diviner or \"witch doctor\", and the bravery of individual regiments were ultimately of little use against the volleys of modern rifles, Gatling guns and artillery at the Ineyzane River, Rorke's Drift, Kambula, Gingingdlovu and finally Ulindi.",
"title": "In battle"
},
{
"paragraph_id": 30,
"text": "While the term \"impi\" has become synonymous with the Zulu nation in international popular culture, it appears in various video games such as Civilization III, Civilization IV: Warlords, Civilization: Revolution, Civilization V: Brave New World, and Civilization VI, where the Impi is the unique unit for the Zulu faction with Shaka as their leader. 'Impi' is also the title of a very famous South Africa song by Johnny Clegg and the band Juluka which has become something of an unofficial national anthem, especially at major international sports events and especially when the opponent is England.",
"title": "In popular culture"
},
{
"paragraph_id": 31,
"text": "Lyrics:",
"title": "In popular culture"
},
{
"paragraph_id": 32,
"text": "Before stage seven of the 2013 Tour de France, the Orica–GreenEDGE cycling team played 'Impi' on their team bus in honor of teammate Daryl Impey, the first South African Tour de France leader.",
"title": "In popular culture"
}
]
| Impi is a Nguni word meaning war or combat and by association any body of men gathered for war, for example impi ya masosha is a term denoting an army. Impi were formed from regiments (amabutho) from amakhanda. In English impi is often used to refer to a Zulu regiment, which is called an ibutho in Zulu or the army. Its beginnings lie far back in historic local warfare customs, when groups of armed men called impi battled. They were systematised radically by the Zulu king Shaka, who was then only the exiled illegitimate son of king Senzangakhona kaJama, but already showing much prowess as a general in the army (impi) of Mthethwa king Dingiswayo in the Ndwandwe–Zulu War of 1817–1819. | 2001-10-19T23:23:32Z | 2023-11-22T18:49:52Z | [
"Template:Cite book",
"Template:Cite tweet",
"Template:ISBN",
"Template:Use British English",
"Template:Lang",
"Template:Convert",
"Template:Reflist",
"Template:Cite news",
"Template:Short description",
"Template:Other uses",
"Template:Use dmy dates",
"Template:Main article"
]
| https://en.wikipedia.org/wiki/Impi |
15,175 | Irish mythology | Irish mythology is the body of myths native to the island of Ireland. It was originally passed down orally in the prehistoric era, being part of ancient Celtic religion. Many myths were later written down in the early medieval era by Christian scribes, who modified and Christianized them to some extent. This body of myths is the largest and best preserved of all the branches of Celtic mythology. The tales and themes continued to be developed over time, and the oral tradition continued in Irish folklore alongside the written tradition, but the main themes and characters remained largely consistent.
The myths are conventionally grouped into 'cycles'. The Mythological Cycle consists of tales and poems about the god-like Túatha Dé Danann, who are based on Ireland's pagan deities, and other mythical races like the Fomorians. Important works in the cycle are the Lebor Gabála Érenn ("Book of Invasions"), a legendary history of Ireland, the Cath Maige Tuired ("Battle of Moytura"), and the Aided Chlainne Lir ("Children of Lir"). The Ulster Cycle consists of heroic legends relating to the Ulaid, the most important of which is the epic Táin Bó Cúailnge ("Cattle Raid of Cooley"). The Fianna Cycle focuses on the exploits of the mythical hero Finn and his warrior band the Fianna, including the lengthy Acallam na Senórach ("Tales of the Elders"). The Kings' Cycle comprises legends about historical and semi-historical kings of Ireland (such as Buile Shuibhne, "The Madness of King Sweeny"), and tales about the origins of dynasties and peoples.
There are also mythical texts that do not fit into any of the cycles; these include the echtrai tales of journeys to the Otherworld (such as The Voyage of Bran), and the Dindsenchas ("lore of places"). Some written material has not survived, and many more myths were probably never written down.
The main supernatural beings in Irish mythology are the Túatha Dé Danann ("the folk of the goddess Danu"), also known by the earlier name Túath Dé ("god folk" or "tribe of the gods"). Early medieval Irish writers also called them the fir dé (god-men) and cenéla dé (god-kindreds), possibly to avoid calling them simply 'gods'. They are often depicted as kings, queens, bards, warriors, heroes, healers and craftsmen who have supernatural powers and are immortal. Prominent members include The Dagda ("the great god"); The Morrígan ("the great queen" or "phantom queen"); Lugh; Nuada; Aengus; Brigid; Manannán; Dian Cécht the healer; and Goibniu the smith. They are also said to control the fertility of the land; the tale De Gabáil in t-Sída says the first Gaels had to establish friendship with the Túath Dé before they could raise crops and herds.
They dwell in the Otherworld but interact with humans and the human world. Many are associated with specific places in the landscape, especially the sídhe: prominent ancient burial mounds such as Brú na Bóinne, which are entrances to Otherworld realms. The Túath Dé can hide themselves with a féth fíada ('magic mist'). They are said to have travelled from the north of the world, but then were forced to live underground in the sídhe after the coming of the Irish.
In some tales, such as Baile in Scáil, kings receive affirmation of their legitimacy from one of the Túath Dé, or a king's right to rule is affirmed by an encounter with an otherworldly woman (see sovereignty goddess). The Túath Dé can also bring doom to unrightful kings.
The medieval writers who wrote about the Túath Dé were Christians. Sometimes they explained the Túath Dé as fallen angels; neutral angels who sided neither with God nor Lucifer and were punished by being forced to dwell on the Earth; or ancient humans who had become highly skilled in magic. However, several writers acknowledged that at least some of them had been gods.
There is strong evidence that many of the Túath Dé represent the gods of Irish paganism. The name itself means "tribe of gods", and the ninth-century Scél Tuain meic Cairill (Tale of Tuan mac Cairill) speaks of the Túath Dé ocus Andé, "tribe of gods and un-gods". Goibniu, Credne and Luchta are called the trí dé dáno, "three gods of craft". In Sanas Cormaic (Cormac's Glossary), Anu is called "mother of the Irish gods", Nét a "god of war", and Brigid a "goddess of poets". Writing in the seventh century, Tírechán explained the sídh folk as "earthly gods" (Latin dei terreni), while Fiacc's Hymn says the Irish adored the sídh before the coming of Saint Patrick. Several of the Tuath Dé are cognate with ancient Celtic deities: Lugh with Lugus, Brigid with Brigantia, Nuada with Nodons, and Ogma with Ogmios.
Nevertheless, John Carey notes that it is not wholly accurate to describe all of them as gods in the medieval literature itself. He argues that the literary Túath Dé are sui generis, and suggests "immortals" might be a more neutral term.
Many of the Túath Dé are not defined by singular qualities, but are more of the nature of well-rounded humans, who have areas of special interests or skills like the druidic arts they learned before traveling to Ireland. In this way, they do not correspond directly to other pantheons such as those of the Greeks or Romans.
Irish goddesses or Otherworldly women are usually connected to the land, the waters, and sovereignty, and are often seen as the oldest ancestors of the people in the region or nation. They are maternal figures caring for the earth itself as well as their descendants, but also fierce defenders, teachers and warriors. The goddess Brigid is linked with poetry, healing, and smithing. Another is the Cailleach, said to have lived many lives that begin and end with her in stone formation. She is still celebrated at Ballycrovane Ogham Stone with offerings and the retelling of her life's stories. The tales of the Cailleach connect her to both land and sea. Several Otherworldly women are associated with sacred sites where seasonal festivals are held. They include Macha of Eamhain Mhacha, Carman, and Tailtiu, among others.
Warrior goddesses are often depicted as a triad and connected with sovereignty and sacred animals. They guard the battlefield and those who do battle, and according to the stories in the Táin Bó Cúailnge, some of them may instigate and direct war themselves. The main goddesses of battle are The Morrígan, Macha, and Badb. Other warrior women are seen in the role of training warriors in the Fianna bands, such as Liath Luachra, one of the women who trained the hero Fionn mac Cumhaill. Zoomorphism is an important feature. Badb Catha, for instance, is "the Raven of Battle", and in the Táin Bó Cúailnge, The Morrígan shapeshifts into an eel, a wolf, and a cow.
Irish gods are divided into four main groups. Group one encompasses the older gods of Gaul and Britain. The second group is the main focus of much of the mythology and surrounds the native Irish gods with their homes in burial mounds. The third group are the gods that dwell in the sea and the fourth group includes stories of the Otherworld. The gods that appear most often are the Dagda and Lugh. Some scholars have argued that the stories of these gods align with Greek stories and gods.
The Fomorians or Fomori (Old Irish: Fomóire) are a supernatural race who are often portrayed as hostile and monstrous beings. Originally, they were said to come from under the sea or the earth. Later, they were portrayed as sea raiders, which was probably influenced by the Viking raids on Ireland around that time. Later still, they were portrayed as giants. They are enemies of Ireland's first settlers and opponents of the Tuatha Dé Danann, although some members of the two races have offspring. The Fomorians were viewed as the alter egos of the Túath Dé. The Túath Dé defeat the Fomorians in the Battle of Mag Tuired. This has been likened to other Indo-European myths of a war between gods, such as the Æsir and Vanir in Norse mythology and the Olympians and Titans in Greek mythology.
Heroes in Irish mythology can be found in two distinct groups. There is the lawful hero who exists within the boundaries of the community, protecting their people from outsiders. Within the kin-group or túath, heroes are human and gods are not.
The Fianna warrior bands are seen as outsiders, connected with the wilderness, youth, and liminal states. Their leader was called Fionn mac Cumhaill, and the first stories of him are told in the fourth century. They are considered aristocrats and outsiders who protect the community from other outsiders; though they may winter with a settled community, they spend the summers living wild, training adolescents and providing a space for war-damaged veterans. The time of vagrancy for these youths is designated as a transition in life after puberty but before manhood, manhood being identified with owning or inheriting property. They live under the authority of their own leaders, or may be somewhat anarchic, and may follow deities or spirits other than those of the settled communities.
The church refused to recognize this group as an institution and referred to them as "sons of death".
The Oilliphéist is a sea-serpent-like monster in Irish mythology and folklore. These monsters were believed to inhabit many lakes and rivers in Ireland and there are legends of saints, especially St. Patrick, and heroes fighting them.
The three main manuscript sources for Irish mythology are the late 11th/early 12th century Lebor na hUidre (Book of the Dun Cow), which is in the library of the Royal Irish Academy, and is the oldest surviving manuscript written entirely in the Irish language; the early 12th-century Book of Leinster, which is in the Library of Trinity College Dublin; and Bodleian Library, MS Rawlinson B 502 (Rawl.), which is in the Bodleian Library at the University of Oxford. Despite the dates of these sources, most of the material they contain predates their composition.
Other important sources include a group of manuscripts that originated in the West of Ireland in the late 14th century or the early 15th century: The Yellow Book of Lecan, The Great Book of Lecan and The Book of Ballymote. The first of these is in the Library of Trinity College and the others are in the Royal Irish Academy. The Yellow Book of Lecan is composed of sixteen parts and includes the legends of Fionn Mac Cumhail, selections of legends of Irish Saints, and the earliest known version of the Táin Bó Cúailnge ("The Cattle Raid of Cooley"). This is one of Europe's oldest epics written in a vernacular language. Other 15th-century manuscripts, such as The Book of Fermoy, also contain interesting materials, as do such later syncretic works such as Geoffrey Keating's Foras Feasa ar Éirinn (The History of Ireland) (c. 1640). These later compilers and writers may well have had access to manuscript sources that have since disappeared.
Most of these manuscripts were created by Christian monks, who may well have been torn between a desire to record their native culture and hostility to pagan beliefs, resulting in some of the gods being euhemerised. Many of the later sources may also have formed parts of a propaganda effort designed to create a history for the people of Ireland that could bear comparison with the mythological descent of their British invaders from the founders of Rome, as promulgated by Geoffrey of Monmouth and others. There was also a tendency to rework Irish genealogies to fit them into the schemas of Greek or biblical genealogy.
Whether medieval Irish literature provides reliable evidence of oral tradition remains a matter for debate. Kenneth Jackson described the Ulster Cycle as a "window on the Iron Age", and Garret Olmsted has attempted to draw parallels between Táin Bó Cuailnge, the Ulster Cycle epic and the iconography of the Gundestrup Cauldron. However, these "nativist" claims have been challenged by "revisionist" scholars who believe that much of the literature was created, rather than merely recorded, in Christian times, more or less in imitation of the epics of classical literature that came with Latin learning. The revisionists point to passages apparently influenced by the Iliad in Táin Bó Cuailnge, and to the Togail Troí, an Irish adaptation of Dares Phrygius' De excidio Troiae historia, found in the Book of Leinster. They also argue that the material culture depicted in the stories is generally closer to that of the time of their composition than to that of the distant past.
The Mythological Cycle, comprising stories of the former gods and origins of the Irish, is the least well preserved of the four cycles. It is about the principal people who invaded and inhabited the island. The people include Cessair and her followers, the Fomorians, the Partholinians, the Nemedians, the Firbolgs, the Tuatha Dé Danann, and the Milesians. The most important sources are the Metrical Dindshenchas or Lore of Places and the Lebor Gabála Érenn or Book of Invasions. Other manuscripts preserve such mythological tales as The Dream of Aengus, the Wooing of Étain and Cath Maige Tuireadh, the (second) Battle of Magh Tuireadh. One of the best known of all Irish stories, Oidheadh Clainne Lir, or The Tragedy of the Children of Lir, is also part of this cycle.
Lebor Gabála Érenn is a pseudo-history of Ireland, tracing the ancestry of the Irish back to before Noah. It tells of a series of invasions or "takings" of Ireland by a succession of peoples, the fifth of whom was the people known as the Túatha Dé Danann ("Peoples of the Goddess Danu"), who were believed to have inhabited the island before the arrival of the Gaels, or Milesians. They faced opposition from their enemies, the Fomorians, led by Balor of the Evil Eye. Balor was eventually slain by Lugh Lámfada (Lugh of the Long Arm) at the second battle of Magh Tuireadh. With the arrival of the Gaels, the Túatha Dé Danann retired underground to become the fairy people of later myth and legend.
The Metrical Dindshenchas is the great onomastic work of early Ireland, giving the naming legends of significant places in a sequence of poems. It includes a great deal of important information on Mythological Cycle figures and stories, including the Battle of Tailtiu, in which the Túatha Dé Danann were defeated by the Milesians.
By the Middle Ages the Túatha Dé Danann were viewed not so much as gods as the shape-shifting magician population of an earlier Golden Age Ireland. Texts such as Lebor Gabála Érenn and Cath Maige Tuireadh present them as kings and heroes of the distant past, complete with death-tales. However, there is considerable evidence, both in the texts and from the wider Celtic world, that they were once considered deities.
Even after they are displaced as the rulers of Ireland, characters such as Lugh, the Mórrígan, Aengus and Manannán Mac Lir appear in stories set centuries later, betraying their immortality. A poem in the Book of Leinster lists many of the Túatha Dé, but ends "Although [the author] enumerates them, he does not worship them". Goibniu, Creidhne and Luchta are referred to as Trí Dé Dána ("three gods of craftsmanship"), and the Dagda's name is interpreted in medieval texts as "the good god". Nuada is cognate with the British god Nodens; Lugh is a reflex of the pan-Celtic deity Lugus, whose name may indicate "light"; Tuireann may be related to the Gaulish Taranis; Ogma to Ogmios; the Badb to Catubodua.
The Ulster Cycle is traditionally set around the first century AD, and most of the action takes place in the provinces of Ulster and Connacht. It consists of a group of heroic tales dealing with the lives of Conchobar mac Nessa, king of Ulster, the great hero Cú Chulainn, who was the son of Lug (Lugh), and of their friends, lovers, and enemies. These are the Ulaid, or people of the North-Eastern corner of Ireland and the action of the stories centres round the royal court at Emain Macha (known in English as Navan Fort), close to the modern town of Armagh. The Ulaid had close links with the Irish colony in Scotland, and part of Cú Chulainn's training takes place in that colony.
The cycle consists of stories of the births, early lives and training, wooing, battles, feastings, and deaths of the heroes. It also reflects a warrior society in which warfare consists mainly of single combats and wealth is measured mainly in cattle. These stories are written mainly in prose. The centerpiece of the Ulster Cycle is the Táin Bó Cúailnge. Other important Ulster Cycle tales include The Tragic Death of Aife's only Son, Bricriu's Feast, and The Destruction of Da Derga's Hostel. The Exile of the Sons of Usnach, better known as the tragedy of Deirdre and the source of plays by John Millington Synge, William Butler Yeats, and Vincent Woods, is also part of this cycle.
This cycle is, in some respects, close to the mythological cycle. Some of the characters from the latter reappear, and the same sort of shape-shifting magic is much in evidence, side by side with a grim, almost callous realism. While we may suspect a few characters, such as Medb or Cú Roí, of once being deities, and Cú Chulainn in particular displays superhuman prowess, the characters are mortal and associated with a specific time and place. If the Mythological Cycle represents a Golden Age, the Ulster Cycle is Ireland's Heroic Age.
Like the Ulster Cycle, the Fianna Cycle or Fenian Cycle, also referred to as the Ossianic Cycle, is concerned with the deeds of Irish heroes. The stories of the Cycle appear to be set around the 3rd century and mainly in the provinces of Leinster and Munster. They differ from the other cycles in the strength of their links with the Gaelic-speaking community in Scotland and there are many extant texts from that country. They also differ from the Ulster Cycle in that the stories are told mainly in verse and that in tone they are nearer to the tradition of romance than the tradition of epic. The stories concern the doings of Fionn mac Cumhaill and his band of soldiers, the Fianna.
The single most important source for the Fianna Cycle is the Acallam na Senórach (Colloquy of the Old Men), which is found in two 15th century manuscripts, the Book of Lismore and Laud 610, as well as a 17th century manuscript from Killiney, County Dublin. The text is dated from linguistic evidence to the 12th century. The text records conversations between Caílte mac Rónáin and Oisín, the last surviving members of the Fianna, and Saint Patrick, and consists of about 8,000 lines. The late dates of the manuscripts may reflect a longer oral tradition for the Fenian stories.
The Fianna of the story are divided into the Clann Baiscne, led by Fionn mac Cumhaill (often rendered as "Finn MacCool", Finn Son of Cumhall), and the Clann Morna, led by his enemy, Goll mac Morna. Goll killed Fionn's father, Cumhal, in battle and the boy Fionn was brought up in secrecy. As a youth, while being trained in the art of poetry, he accidentally burned his thumb while cooking the Salmon of Knowledge, which allowed him to suck or bite his thumb to receive bursts of stupendous wisdom. He took his place as the leader of his band and numerous tales are told of their adventures. Two of the greatest of the Irish tales, Tóraigheacht Dhiarmada agus Ghráinne (The Pursuit of Diarmuid and Gráinne) and Oisín in Tír na nÓg form part of the cycle. The Diarmuid and Grainne story, which is one of the cycle's few prose tales, is a probable source of Tristan and Iseult.
The world of the Fianna Cycle is one in which professional warriors spend their time hunting, fighting, and engaging in adventures in the spirit world. New entrants into the band are expected to be knowledgeable in poetry as well as undergo a number of physical tests or ordeals. Most of the poems are attributed to Oisín. This cycle creates a bridge between pre-Christian and Christian times.
It was part of the duty of the medieval Irish bards, or court poets, to record the history of the family and the genealogy of the king they served. This they did in poems that blended the mythological and the historical to a greater or lesser degree. The resulting stories form what has come to be known as the Cycle of the Kings, or more correctly Cycles, as there are a number of independent groupings. The term is a relatively recent addition, coined in 1946 by the Irish literary critic Myles Dillon.
The kings that are included range from the almost entirely mythological Labraid Loingsech, who allegedly became High King of Ireland around 431 BC, to the entirely historical Brian Boru. However, the greatest glory of the Kings' Cycle is the Buile Shuibhne (The Frenzy of Sweeney), a 12th-century tale told in verse and prose. Suibhne, king of Dál nAraidi, was cursed by St. Ronan and became a kind of half-man, half-bird, condemned to live out his life in the woods, fleeing from his human companions. The story has captured the imaginations of contemporary Irish poets and has been translated by Trevor Joyce and Seamus Heaney.
The adventures, or echtrae, are a group of stories of visits to the Irish Other World (which may be westward across the sea, underground, or simply invisible to mortals). The most famous, Oisin in Tir na nÓg belongs to the Fenian Cycle, but several free-standing adventures survive, including The Adventure of Conle, The Voyage of Bran mac Ferbail, and The Adventure of Lóegaire.
The voyages, or immrama, are tales of sea journeys and the wonders seen on them, which may have resulted from combining the experiences of fishermen with the Other World elements that inform the adventures. Of the seven immrama mentioned in the manuscripts, only three have survived: The Voyage of Máel Dúin, the Voyage of the Uí Chorra, and the Voyage of Snedgus and Mac Riagla. The Voyage of Mael Duin is the forerunner of the later Voyage of St. Brendan. Later works from around the 8th century AD that influenced European literature include The Vision of Adamnán.
Not all of Irish mythology was recorded in writing; many stories have been passed down orally through traditional storytelling. Some of these stories have been lost, but some Celtic regions continue to tell folktales to the modern day. Folktales and stories were primarily preserved by monastic scribes from the bards of nobility. Once the noble houses started to decline, this tradition came to an abrupt end. The bards passed the stories to their families, and the families would take on the oral tradition of storytelling.
During the first few years of the 20th century, Herminie T. Kavanagh wrote down many Irish folk tales, which she published in magazines and in two books. Twenty-six years after her death, the tales from her two books, Darby O'Gill and the Good People and Ashes of Old Wishes, were made into the film Darby O'Gill and the Little People. Noted Irish playwright Lady Gregory also collected folk stories to preserve Irish history. The Irish Folklore Commission gathered folk tales from the general Irish populace from 1935 onward.
Primary sources in English translation
Primary sources in Medieval Irish
Secondary sources | [
{
"paragraph_id": 0,
"text": "Irish mythology is the body of myths native to the island of Ireland. It was originally passed down orally in the prehistoric era, being part of ancient Celtic religion. Many myths were later written down in the early medieval era by Christian scribes, who modified and Christianized them to some extent. This body of myths is the largest and best preserved of all the branches of Celtic mythology. The tales and themes continued to be developed over time, and the oral tradition continued in Irish folklore alongside the written tradition, but the main themes and characters remained largely consistent.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The myths are conventionally grouped into 'cycles'. The Mythological Cycle consists of tales and poems about the god-like Túatha Dé Danann, who are based on Ireland's pagan deities, and other mythical races like the Fomorians. Important works in the cycle are the Lebor Gabála Érenn (\"Book of Invasions\"), a legendary history of Ireland, the Cath Maige Tuired (\"Battle of Moytura\"), and the Aided Chlainne Lir (\"Children of Lir\"). The Ulster Cycle consists of heroic legends relating to the Ulaid, the most important of which is the epic Táin Bó Cúailnge (\"Cattle Raid of Cooley\"). The Fianna Cycle focuses on the exploits of the mythical hero Finn and his warrior band the Fianna, including the lengthy Acallam na Senórach (\"Tales of the Elders\"). The Kings' Cycle comprises legends about historical and semi-historical kings of Ireland (such as Buile Shuibhne, \"The Madness of King Sweeny\"), and tales about the origins of dynasties and peoples.",
"title": ""
},
{
"paragraph_id": 2,
"text": "There are also mythical texts that do not fit into any of the cycles; these include the echtrai tales of journeys to the Otherworld (such as The Voyage of Bran), and the Dindsenchas (\"lore of places\"). Some written material has not survived, and many more myths were probably never written down.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The main supernatural beings in Irish mythology are the Túatha Dé Danann (\"the folk of the goddess Danu\"), also known by the earlier name Túath Dé (\"god folk\" or \"tribe of the gods\"). Early medieval Irish writers also called them the fir dé (god-men) and cenéla dé (god-kindreds), possibly to avoid calling them simply 'gods'. They are often depicted as kings, queens, bards, warriors, heroes, healers and craftsmen who have supernatural powers and are immortal. Prominent members include The Dagda (\"the great god\"); The Morrígan (\"the great queen\" or \"phantom queen\"); Lugh; Nuada; Aengus; Brigid; Manannán; Dian Cécht the healer; and Goibniu the smith. They are also said to control the fertility of the land; the tale De Gabáil in t-Sída says the first Gaels had to establish friendship with the Túath Dé before they could raise crops and herds.",
"title": "Figures"
},
{
"paragraph_id": 4,
"text": "They dwell in the Otherworld but interact with humans and the human world. Many are associated with specific places in the landscape, especially the sídhe: prominent ancient burial mounds such as Brú na Bóinne, which are entrances to Otherworld realms. The Túath Dé can hide themselves with a féth fíada ('magic mist'). They are said to have travelled from the north of the world, but then were forced to live underground in the sídhe after the coming of the Irish.",
"title": "Figures"
},
{
"paragraph_id": 5,
"text": "In some tales, such as Baile in Scáil, kings receive affirmation of their legitimacy from one of the Túath Dé, or a king's right to rule is affirmed by an encounter with an otherworldly woman (see sovereignty goddess). The Túath Dé can also bring doom to unrightful kings.",
"title": "Figures"
},
{
"paragraph_id": 6,
"text": "The medieval writers who wrote about the Túath Dé were Christians. Sometimes they explained the Túath Dé as fallen angels; neutral angels who sided neither with God nor Lucifer and were punished by being forced to dwell on the Earth; or ancient humans who had become highly skilled in magic. However, several writers acknowledged that at least some of them had been gods.",
"title": "Figures"
},
{
"paragraph_id": 7,
"text": "There is strong evidence that many of the Túath Dé represent the gods of Irish paganism. The name itself means \"tribe of gods\", and the ninth-century Scél Tuain meic Cairill (Tale of Tuan mac Cairill) speaks of the Túath Dé ocus Andé, \"tribe of gods and un-gods\". Goibniu, Credne and Luchta are called the trí dé dáno, \"three gods of craft\". In Sanas Cormaic (Cormac's Glossary), Anu is called \"mother of the Irish gods\", Nét a \"god of war\", and Brigid a \"goddess of poets\". Writing in the seventh century, Tírechán explained the sídh folk as \"earthly gods\" (Latin dei terreni), while Fiacc's Hymn says the Irish adored the sídh before the coming of Saint Patrick. Several of the Tuath Dé are cognate with ancient Celtic deities: Lugh with Lugus, Brigid with Brigantia, Nuada with Nodons, and Ogma with Ogmios.",
"title": "Figures"
},
{
"paragraph_id": 8,
"text": "Nevertheless, John Carey notes that it is not wholly accurate to describe all of them as gods in the medieval literature itself. He argues that the literary Túath Dé are sui generis, and suggests \"immortals\" might be a more neutral term.",
"title": "Figures"
},
{
"paragraph_id": 9,
"text": "Many of the Túath Dé are not defined by singular qualities, but are more of the nature of well-rounded humans, who have areas of special interests or skills like the druidic arts they learned before traveling to Ireland. In this way, they do not correspond directly to other pantheons such as those of the Greeks or Romans.",
"title": "Figures"
},
{
"paragraph_id": 10,
"text": "Irish goddesses or Otherworldly women are usually connected to the land, the waters, and sovereignty, and are often seen as the oldest ancestors of the people in the region or nation. They are maternal figures caring for the earth itself as well as their descendants, but also fierce defenders, teachers and warriors. The goddess Brigid is linked with poetry, healing, and smithing. Another is the Cailleach, said to have lived many lives that begin and end with her in stone formation. She is still celebrated at Ballycrovane Ogham Stone with offerings and the retelling of her life's stories. The tales of the Cailleach connect her to both land and sea. Several Otherworldly women are associated with sacred sites where seasonal festivals are held. They include Macha of Eamhain Mhacha, Carman, and Tailtiu, among others.",
"title": "Figures"
},
{
"paragraph_id": 11,
"text": "Warrior goddesses are often depicted as a triad and connected with sovereignty and sacred animals. They guard the battlefield and those who do battle, and according to the stories in the Táin Bó Cúailnge, some of them may instigate and direct war themselves. The main goddesses of battle are The Morrígan, Macha, and Badb. Other warrior women are seen in the role of training warriors in the Fianna bands, such as Liath Luachra, one of the women who trained the hero Fionn mac Cumhaill. Zoomorphism is an important feature. Badb Catha, for instance, is \"the Raven of Battle\", and in the Táin Bó Cúailnge, The Morrígan shapeshifts into an eel, a wolf, and a cow.",
"title": "Figures"
},
{
"paragraph_id": 12,
"text": "Irish gods are divided into four main groups. Group one encompasses the older gods of Gaul and Britain. The second group is the main focus of much of the mythology and surrounds the native Irish gods with their homes in burial mounds. The third group are the gods that dwell in the sea and the fourth group includes stories of the Otherworld. The gods that appear most often are the Dagda and Lugh. Some scholars have argued that the stories of these gods align with Greek stories and gods.",
"title": "Figures"
},
{
"paragraph_id": 13,
"text": "The Fomorians or Fomori (Old Irish: Fomóire) are a supernatural race, who are often portrayed as hostile and monstrous beings. Originally, they were said to come from under the sea or the earth. Later, they were portrayed as sea raiders, which was probably influenced by the Viking raids on Ireland around that time. Later still they were portrayed as giants. They are enemies of Ireland's first settlers and opponents of the Tuatha Dé Danann, although some members of the two races have offspring. The Fomorians were viewed as the alter-egos to the Túath Dé The Túath Dé defeat the Fomorians in the Battle of Mag Tuired. This has been likened to other Indo-European myths of a war between gods, such as the Æsir and Vanir in Norse mythology and the Olympians and Titans in Greek mythology.",
"title": "Figures"
},
{
"paragraph_id": 14,
"text": "Heroes in Irish mythology can be found in two distinct groups. There is the lawful hero who exists within the boundaries of the community, protecting their people from outsiders. Within the kin-group or túath, heroes are human and gods are not.",
"title": "Figures"
},
{
"paragraph_id": 15,
"text": "The Fianna warrior bands are seen as outsiders, connected with the wilderness, youth, and liminal states. Their leader was called Fionn mac Cumhaill, and the first stories of him are told in fourth century. They are considered aristocrats and outsiders who protect the community from other outsiders; though they may winter with a settled community, they spend the summers living wild, training adolescents and providing a space for war-damaged veterans. The time of vagrancy for these youths is designated as a transition in life post puberty but pre-manhood. Manhood being identified as owning or inheriting property. They live under the authority of their own leaders, or may be somewhat anarchic, and may follow other deities or spirits than the settled communities.",
"title": "Figures"
},
{
"paragraph_id": 16,
"text": "The church refused to recognize this group as an institution and referred to them as \"sons of death\".",
"title": "Figures"
},
{
"paragraph_id": 17,
"text": "The Oilliphéist is a sea-serpent-like monster in Irish mythology and folklore. These monsters were believed to inhabit many lakes and rivers in Ireland and there are legends of saints, especially St. Patrick, and heroes fighting them.",
"title": "Figures"
},
{
"paragraph_id": 18,
"text": "The three main manuscript sources for Irish mythology are the late 11th/early 12th century Lebor na hUidre (Book of the Dun Cow), which is in the library of the Royal Irish Academy, and is the oldest surviving manuscript written entirely in the Irish language; the early 12th-century Book of Leinster, which is in the Library of Trinity College Dublin; and Bodleian Library, MS Rawlinson B 502 (Rawl.), which is in the Bodleian Library at the University of Oxford. Despite the dates of these sources, most of the material they contain predates their composition.",
"title": "Sources"
},
{
"paragraph_id": 19,
"text": "Other important sources include a group of manuscripts that originated in the West of Ireland in the late 14th century or the early 15th century: The Yellow Book of Lecan, The Great Book of Lecan and The Book of Ballymote. The first of these is in the Library of Trinity College and the others are in the Royal Irish Academy. The Yellow Book of Lecan is composed of sixteen parts and includes the legends of Fionn Mac Cumhail, selections of legends of Irish Saints, and the earliest known version of the Táin Bó Cúailnge (\"The Cattle Raid of Cooley\"). This is one of Europe's oldest epics written in a vernacular language. Other 15th-century manuscripts, such as The Book of Fermoy, also contain interesting materials, as do such later syncretic works such as Geoffrey Keating's Foras Feasa ar Éirinn (The History of Ireland) (c. 1640). These later compilers and writers may well have had access to manuscript sources that have since disappeared.",
"title": "Sources"
},
{
"paragraph_id": 20,
"text": "Most of these manuscripts were created by Christian monks, who may well have been torn between a desire to record their native culture and hostility to pagan beliefs, resulting in some of the gods being euhemerised. Many of the later sources may also have formed parts of a propaganda effort designed to create a history for the people of Ireland that could bear comparison with the mythological descent of their British invaders from the founders of Rome, as promulgated by Geoffrey of Monmouth and others. There was also a tendency to rework Irish genealogies to fit them into the schemas of Greek or biblical genealogy.",
"title": "Sources"
},
{
"paragraph_id": 21,
"text": "Whether medieval Irish literature provides reliable evidence of oral tradition remains a matter for debate. Kenneth Jackson described the Ulster Cycle as a \"window on the Iron Age\", and Garret Olmsted has attempted to draw parallels between Táin Bó Cuailnge, the Ulster Cycle epic and the iconography of the Gundestrup Cauldron. However, these \"nativist\" claims have been challenged by \"revisionist\" scholars who believe that much of the literature was created, rather than merely recorded, in Christian times, more or less in imitation of the epics of classical literature that came with Latin learning. The revisionists point to passages apparently influenced by the Iliad in Táin Bó Cuailnge, and to the Togail Troí, an Irish adaptation of Dares Phrygius' De excidio Troiae historia, found in the Book of Leinster. They also argue that the material culture depicted in the stories is generally closer to that of the time of their composition than to that of the distant past.",
"title": "Sources"
},
{
"paragraph_id": 22,
"text": "The Mythological Cycle, comprising stories of the former gods and origins of the Irish, is the least well preserved of the four cycles. It is about the principal people who invaded and inhabited the island. The people include Cessair and her followers, the Formorians, the Partholinians, the Nemedians, the Firbolgs, the Tuatha Dé Danann, and the Milesians. The most important sources are the Metrical Dindshenchas or Lore of Places and the Lebor Gabála Érenn or Book of Invasions. Other manuscripts preserve such mythological tales as The Dream of Aengus, the Wooing Of Étain and Cath Maige Tuireadh, the (second) Battle of Magh Tuireadh. One of the best known of all Irish stories, Oidheadh Clainne Lir, or The Tragedy of the Children of Lir, is also part of this cycle.",
"title": "Mythological Cycle"
},
{
"paragraph_id": 23,
"text": "Lebor Gabála Érenn is a pseudo-history of Ireland, tracing the ancestry of the Irish back to before Noah. It tells of a series of invasions or \"takings\" of Ireland by a succession of peoples, the fifth of whom was the people known as the Túatha Dé Danann (\"Peoples of the Goddess Danu\"), who were believed to have inhabited the island before the arrival of the Gaels, or Milesians. They faced opposition from their enemies, the Fomorians, led by Balor of the Evil Eye. Balor was eventually slain by Lugh Lámfada (Lugh of the Long Arm) at the second battle of Magh Tuireadh. With the arrival of the Gaels, the Túatha Dé Danann retired underground to become the fairy people of later myth and legend.",
"title": "Mythological Cycle"
},
{
"paragraph_id": 24,
"text": "The Metrical Dindshenchas is the great onomastics work of early Ireland, giving the naming legends of significant places in a sequence of poems. It includes a lot of important information on Mythological Cycle figures and stories, including the Battle of Tailtiu, in which the Túatha Dé Danann were defeated by the Milesians.",
"title": "Mythological Cycle"
},
{
"paragraph_id": 25,
"text": "It is important to note that by the Middle Ages the Túatha Dé Danann were not viewed so much as gods as the shape-shifting magician population of an earlier Golden Age Ireland. Texts such as Lebor Gabála Érenn and Cath Maige Tuireadh present them as kings and heroes of the distant past, complete with death-tales. However, there is considerable evidence, both in the texts and from the wider Celtic world, that they were once considered deities.",
"title": "Mythological Cycle"
},
{
"paragraph_id": 26,
"text": "Even after they are displaced as the rulers of Ireland, characters such as Lugh, the Mórrígan, Aengus and Manannán Mac Lir appear in stories set centuries later, betraying their immortality. A poem in the Book of Leinster lists many of the Túatha Dé, but ends \"Although [the author] enumerates them, he does not worship them\". Goibniu, Creidhne and Luchta are referred to as Trí Dé Dána (\"three gods of craftsmanship\"), and the Dagda's name is interpreted in medieval texts as \"the good god\". Nuada is cognate with the British god Nodens; Lugh is a reflex of the pan-Celtic deity Lugus, the name of whom may indicate \"Light\"; Tuireann may be related to the Gaulish Taranis; Ogma to Ogmios; the Badb to Catubodua.",
"title": "Mythological Cycle"
},
{
"paragraph_id": 27,
"text": "The Ulster Cycle is traditionally set around the first century AD, and most of the action takes place in the provinces of Ulster and Connacht. It consists of a group of heroic tales dealing with the lives of Conchobar mac Nessa, king of Ulster, the great hero Cú Chulainn, who was the son of Lug (Lugh), and of their friends, lovers, and enemies. These are the Ulaid, or people of the North-Eastern corner of Ireland and the action of the stories centres round the royal court at Emain Macha (known in English as Navan Fort), close to the modern town of Armagh. The Ulaid had close links with the Irish colony in Scotland, and part of Cú Chulainn's training takes place in that colony.",
"title": "Ulster Cycle"
},
{
"paragraph_id": 28,
"text": "The cycle consists of stories of the births, early lives and training, wooing, battles, feastings, and deaths of the heroes. It also reflects a warrior society in which warfare consists mainly of single combats and wealth is measured mainly in cattle. These stories are written mainly in prose. The centerpiece of the Ulster Cycle is the Táin Bó Cúailnge. Other important Ulster Cycle tales include The Tragic Death of Aife's only Son, Bricriu's Feast, and The Destruction of Da Derga's Hostel. The Exile of the Sons of Usnach, better known as the tragedy of Deirdre and the source of plays by John Millington Synge, William Butler Yeats, and Vincent Woods, is also part of this cycle.",
"title": "Ulster Cycle"
},
{
"paragraph_id": 29,
"text": "This cycle is, in some respects, close to the mythological cycle. Some of the characters from the latter reappear, and the same sort of shape-shifting magic is much in evidence, side by side with a grim, almost callous realism. While we may suspect a few characters, such as Medb or Cú Roí, of once being deities, and Cú Chulainn in particular displays superhuman prowess, the characters are mortal and associated with a specific time and place. If the Mythological Cycle represents a Golden Age, the Ulster Cycle is Ireland's Heroic Age.",
"title": "Ulster Cycle"
},
{
"paragraph_id": 30,
"text": "Like the Ulster Cycle, the Fianna Cycle or Fenian Cycle, also referred to as the Ossianic Cycle, is concerned with the deeds of Irish heroes. The stories of the Cycle appear to be set around the 3rd century and mainly in the provinces of Leinster and Munster. They differ from the other cycles in the strength of their links with the Gaelic-speaking community in Scotland and there are many extant texts from that country. They also differ from the Ulster Cycle in that the stories are told mainly in verse and that in tone they are nearer to the tradition of romance than the tradition of epic. The stories concern the doings of Fionn mac Cumhaill and his band of soldiers, the Fianna.",
"title": "Fianna Cycle"
},
{
"paragraph_id": 31,
"text": "The single most important source for the Fianna Cycle is the Acallam na Senórach (Colloquy of the Old Men), which is found in two 15th century manuscripts, the Book of Lismore and Laud 610, as well as a 17th century manuscript from Killiney, County Dublin. The text is dated from linguistic evidence to the 12th century. The text records conversations between Caílte mac Rónáin and Oisín, the last surviving members of the Fianna, and Saint Patrick, and consists of about 8,000 lines. The late dates of the manuscripts may reflect a longer oral tradition for the Fenian stories.",
"title": "Fianna Cycle"
},
{
"paragraph_id": 32,
"text": "The Fianna of the story are divided into the Clann Baiscne, led by Fionn mac Cumhaill (often rendered as \"Finn MacCool\", Finn Son of Cumhall), and the Clann Morna, led by his enemy, Goll mac Morna. Goll killed Fionn's father, Cumhal, in battle and the boy Fionn was brought up in secrecy. As a youth, while being trained in the art of poetry, he accidentally burned his thumb while cooking the Salmon of Knowledge, which allowed him to suck or bite his thumb to receive bursts of stupendous wisdom. He took his place as the leader of his band and numerous tales are told of their adventures. Two of the greatest of the Irish tales, Tóraigheacht Dhiarmada agus Ghráinne (The Pursuit of Diarmuid and Gráinne) and Oisín in Tír na nÓg form part of the cycle. The Diarmuid and Grainne story, which is one of the cycle's few prose tales, is a probable source of Tristan and Iseult.",
"title": "Fianna Cycle"
},
{
"paragraph_id": 33,
"text": "The world of the Fianna Cycle is one in which professional warriors spend their time hunting, fighting, and engaging in adventures in the spirit world. New entrants into the band are expected to be knowledgeable in poetry as well as undergo a number of physical tests or ordeals. Most of the poems are attributed to being composed by Oisín. This cycle creates a bridge between pre-Christian and Christian times.",
"title": "Fianna Cycle"
},
{
"paragraph_id": 34,
"text": "It was part of the duty of the medieval Irish bards, or court poets, to record the history of the family and the genealogy of the king they served. This they did in poems that blended the mythological and the historical to a greater or lesser degree. The resulting stories from what has come to be known as the Cycle of the Kings, or more correctly Cycles, as there are a number of independent groupings. This term is a more recent addition to the cycles, with it being coined in 1946 by Irish literary critic Myles Dillon.",
"title": "Kings' Cycle"
},
{
"paragraph_id": 35,
"text": "The kings that are included range from the almost entirely mythological Labraid Loingsech, who allegedly became High King of Ireland around 431 BC, to the entirely historical Brian Boru. However, the greatest glory of the Kings' Cycle is the Buile Shuibhne (The Frenzy of Sweeney), a 12th century tale told in verse and prose. Suibhne, king of Dál nAraidi, was cursed by St. Ronan and became a kind of half-man, half bird, condemned to live out his life in the woods, fleeing from his human companions. The story has captured the imaginations of contemporary Irish poets and has been translated by Trevor Joyce and Seamus Heaney.",
"title": "Kings' Cycle"
},
{
"paragraph_id": 36,
"text": "The adventures, or echtrae, are a group of stories of visits to the Irish Other World (which may be westward across the sea, underground, or simply invisible to mortals). The most famous, Oisin in Tir na nÓg belongs to the Fenian Cycle, but several free-standing adventures survive, including The Adventure of Conle, The Voyage of Bran mac Ferbail, and The Adventure of Lóegaire.",
"title": "Other tales"
},
{
"paragraph_id": 37,
"text": "The voyages, or immrama, are tales of sea journeys and the wonders seen on them that may have resulted from the combination of the experiences of fishermen combined and the Other World elements that inform the adventures. Of the seven immrama mentioned in the manuscripts, only three have survived: The Voyage of Máel Dúin, the Voyage of the Uí Chorra, and the Voyage of Snedgus and Mac Riagla. The Voyage of Mael Duin is the forerunner of the later Voyage of St. Brendan. While not as ancient, later 8th century AD works, that influenced European literature, include The Vision of Adamnán.",
"title": "Other tales"
},
{
"paragraph_id": 38,
"text": "Although there are no written sources of Irish mythology, many stories are passed down orally through traditional storytelling. Some of these stories have been lost, but some Celtic regions continue to tell folktales to the modern-day. Folktales and stories were primarily preserved by monastic scribes from the bards of nobility. Once the noble houses started to decline, this tradition was put to an abrupt end. The bards passed the stories to their families, and the families would take on the oral tradition of storytelling.",
"title": "Other tales"
},
{
"paragraph_id": 39,
"text": "During the first few years of the 20th century, Herminie T. Kavanagh wrote down many Irish folk tales, which she published in magazines and in two books. Twenty-six years after her death, the tales from her two books, Darby O'Gill and the Good People and Ashes of Old Wishes, were made into the film Darby O'Gill and the Little People. Noted Irish playwright Lady Gregory also collected folk stories to preserve Irish history. The Irish Folklore Commission gathered folk tales from the general Irish populace from 1935 onward.",
"title": "Other tales"
},
{
"paragraph_id": 40,
"text": "Primary sources in English translation",
"title": "References"
},
{
"paragraph_id": 41,
"text": "Primary sources in Medieval Irish",
"title": "References"
},
{
"paragraph_id": 42,
"text": "Secondary sources",
"title": "References"
}
]
| Irish mythology is the body of myths native to the island of Ireland. It was originally passed down orally in the prehistoric era, being part of ancient Celtic religion. Many myths were later written down in the early medieval era by Christian scribes, who modified and Christianized them to some extent. This body of myths is the largest and best preserved of all the branches of Celtic mythology. The tales and themes continued to be developed over time, and the oral tradition continued in Irish folklore alongside the written tradition, but the main themes and characters remained largely consistent. The myths are conventionally grouped into 'cycles'. The Mythological Cycle consists of tales and poems about the god-like Túatha Dé Danann, who are based on Ireland's pagan deities, and other mythical races like the Fomorians. Important works in the cycle are the Lebor Gabála Érenn, a legendary history of Ireland, the Cath Maige Tuired, and the Aided Chlainne Lir. The Ulster Cycle consists of heroic legends relating to the Ulaid, the most important of which is the epic Táin Bó Cúailnge. The Fianna Cycle focuses on the exploits of the mythical hero Finn and his warrior band the Fianna, including the lengthy Acallam na Senórach. The Kings' Cycle comprises legends about historical and semi-historical kings of Ireland, and tales about the origins of dynasties and peoples. There are also mythical texts that do not fit into any of the cycles; these include the echtrai tales of journeys to the Otherworld, and the Dindsenchas. Some written material has not survived, and many more myths were probably never written down. | 2001-10-27T23:28:39Z | 2024-01-01T00:31:24Z | [
"Template:Multiple issues",
"Template:Reflist",
"Template:ISBN",
"Template:Ireland topics",
"Template:Use British English",
"Template:Citation needed",
"Template:Cite encyclopedia",
"Template:Cite web",
"Template:Commons category",
"Template:Navboxes",
"Template:Short description",
"Template:Use dmy dates",
"Template:Lang",
"Template:Circa",
"Template:Sfn",
"Template:Authority control",
"Template:Celtic mythology",
"Template:Page needed",
"Template:Lang-sga",
"Template:Request quotation",
"Template:Main",
"Template:Cite book"
]
| https://en.wikipedia.org/wiki/Irish_mythology |
15,176 | Insurance | Insurance is a means of protection from financial loss in which, in exchange for a fee, a party agrees to compensate another party in the event of a certain loss, damage, or injury. It is a form of risk management, primarily used to hedge against the risk of a contingent or uncertain loss.
An entity which provides insurance is known as an insurer, insurance company, insurance carrier, or underwriter. A person or entity who buys insurance is known as a policyholder, while a person or entity covered under the policy is called an insured. The insurance transaction involves the policyholder assuming a guaranteed, known, and relatively small loss in the form of a payment to the insurer (a premium) in exchange for the insurer's promise to compensate the insured in the event of a covered loss. The loss may or may not be financial, but it must be reducible to financial terms. Furthermore, it usually involves something in which the insured has an insurable interest established by ownership, possession, or pre-existing relationship.
The insured receives a contract, called the insurance policy, which details the conditions and circumstances under which the insurer will compensate the insured, or their designated beneficiary or assignee. The amount of money charged by the insurer to the policyholder for the coverage set forth in the insurance policy is called the premium. If the insured experiences a loss which is potentially covered by the insurance policy, the insured submits a claim to the insurer for processing by a claims adjuster. A mandatory out-of-pocket expense required by an insurance policy before an insurer will pay a claim is called a deductible (or if required by a health insurance policy, a copayment). The insurer may hedge its own risk by taking out reinsurance, whereby another insurance company agrees to carry some of the risks, especially if the primary insurer deems the risk too large for it to carry.
Methods for transferring or distributing risk were practiced by Chinese and Indian traders as long ago as the 3rd and 2nd millennia BC, respectively. Chinese merchants travelling treacherous river rapids would redistribute their wares across many vessels to limit the loss due to any single vessel capsizing.
Codex Hammurabi Law 238 (c. 1755–1750 BC) stipulated that a sea captain, ship-manager, or ship charterer that saved a ship from total loss was only required to pay one-half the value of the ship to the ship-owner. In the Digesta seu Pandectae (533), the second volume of the codification of laws ordered by Justinian I (527–565), a legal opinion written by the Roman jurist Paulus in 235 AD was included about the Lex Rhodia ("Rhodian law"). It articulates the general average principle of marine insurance established on the island of Rhodes in approximately 1000 to 800 BC, plausibly by the Phoenicians during the proposed Dorian invasion and emergence of the purported Sea Peoples during the Greek Dark Ages (c. 1100–c. 750).
The law of general average is the fundamental principle that underlies all insurance. In 1816, an archeological excavation in Minya, Egypt produced a Nerva–Antonine dynasty-era tablet from the ruins of the Temple of Antinous in Antinoöpolis, Aegyptus. The tablet prescribed the rules and membership dues of a burial society collegium established in Lanuvium, Italia in approximately 133 AD during the reign of Hadrian (117–138) of the Roman Empire. In 1851 AD, future U.S. Supreme Court Associate Justice Joseph P. Bradley (1870–1892 AD), once employed as an actuary for the Mutual Benefit Life Insurance Company, submitted an article to the Journal of the Institute of Actuaries. His article detailed an historical account of a Severan dynasty-era life table compiled by the Roman jurist Ulpian in approximately 220 AD that was also included in the Digesta.
Concepts of insurance have also been found in 3rd century BC Hindu scriptures such as Dharmasastra, Arthashastra and Manusmriti. The ancient Greeks had marine loans. Money was advanced on a ship or cargo, to be repaid with large interest if the voyage prospered. However, the money would not be repaid at all if the ship were lost, thus making the rate of interest high enough to pay not only for the use of the capital but also for the risk of losing it (fully described by Demosthenes). Loans of this character have ever since been common in maritime lands under the name of bottomry and respondentia bonds.
The direct insurance of sea-risks for a premium paid independently of loans began in Belgium about 1300 AD.
Separate insurance contracts (i.e., insurance policies not bundled with loans or other kinds of contracts) were invented in Genoa in the 14th century, as were insurance pools backed by pledges of landed estates. The first known insurance contract dates from Genoa in 1347. In the next century, maritime insurance developed widely, and premiums were varied with risks. These new insurance contracts allowed insurance to be separated from investment, a separation of roles that first proved useful in marine insurance.
The earliest known policy of life insurance was made in the Royal Exchange, London, on the 18th of June 1583, for £383, 6s. 8d. for twelve months on the life of William Gibbons.
Insurance became far more sophisticated in Enlightenment-era Europe, where specialized varieties developed.
Property insurance as we know it today can be traced to the Great Fire of London, which in 1666 devoured more than 13,000 houses. The devastating effects of the fire converted the development of insurance "from a matter of convenience into one of urgency, a change of opinion reflected in Sir Christopher Wren's inclusion of a site for 'the Insurance Office' in his new plan for London in 1667." A number of attempted fire insurance schemes came to nothing, but in 1681, economist Nicholas Barbon and eleven associates established the first fire insurance company, the "Insurance Office for Houses", at the back of the Royal Exchange to insure brick and frame homes. Initially, 5,000 homes were insured by his Insurance Office.
At the same time, the first insurance schemes for the underwriting of business ventures became available. By the end of the seventeenth century, London's growth as a centre for trade was increasing due to the demand for marine insurance. In the late 1680s, Edward Lloyd opened a coffee house, which became the meeting place for parties in the shipping industry wishing to insure cargoes and ships, including those willing to underwrite such ventures. These informal beginnings led to the establishment of the insurance market Lloyd's of London and several related shipping and insurance businesses.
Life insurance policies were taken out in the early 18th century. The first company to offer life insurance was the Amicable Society for a Perpetual Assurance Office, founded in London in 1706 by William Talbot and Sir Thomas Allen. Upon the same principle, Edward Rowe Mores established the Society for Equitable Assurances on Lives and Survivorship in 1762.
It was the world's first mutual insurer and it pioneered age-based premiums based on mortality rate, laying "the framework for scientific insurance practice and development" and "the basis of modern life assurance upon which all life assurance schemes were subsequently based."
In the late 19th century "accident insurance" began to become available. The first company to offer accident insurance was the Railway Passengers Assurance Company, formed in 1848 in England to insure against the rising number of fatalities on the nascent railway system.
The first international insurance rule was the York Antwerp Rules (YAR) for the distribution of costs between ship and cargo in the event of general average. In 1873 the "Association for the Reform and Codification of the Law of Nations", the forerunner of the International Law Association (ILA), was founded in Brussels. It published the first YAR in 1890, before switching to the present title of the "International Law Association" in 1895.
By the late 19th century governments began to initiate national insurance programs against sickness and old age. Germany built on a tradition of welfare programs in Prussia and Saxony that began as early as the 1840s. In the 1880s Chancellor Otto von Bismarck introduced old age pensions, accident insurance and medical care that formed the basis for Germany's welfare state. In Britain more extensive legislation was introduced by the Liberal government in the 1911 National Insurance Act. This gave the British working classes the first contributory system of insurance against illness and unemployment. This system was greatly expanded after the Second World War under the influence of the Beveridge Report, to form the first modern welfare state.
In 2008, the International Network of Insurance Associations (INIA), then an informal network, became active; it has since been succeeded by the Global Federation of Insurance Associations (GFIA), formally founded in 2012 with the aim of increasing the insurance industry's effectiveness in providing input to international regulatory bodies and of contributing more effectively to the international dialogue on issues of common interest. The GFIA consists of 40 member associations and 1 observer association in 67 countries, whose member companies account for around 89% of total insurance premiums worldwide.
Insurance involves pooling funds from many insured entities (known as exposures) to pay for the losses that only some insureds may incur. The insured entities are therefore protected from risk for a fee, with the fee being dependent upon the frequency and severity of the event occurring. In order to be an insurable risk, the risk insured against must meet certain characteristics. Insurance as a financial intermediary is a commercial enterprise and a major part of the financial services industry, but individual entities can also self-insure through saving money for possible future losses.
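As a rough illustration of the pooling idea described above (a sketch only, with entirely hypothetical numbers that are not drawn from any real insurer), the premium for each member of a pool can be thought of as the expected loss per policy plus a margin for expenses:

    # Rough sketch of risk pooling with hypothetical numbers: many policyholders
    # each face a small chance of a large loss, and premiums collected from the
    # whole pool fund the few claims that actually occur.
    policyholders = 1_000
    loss_probability = 0.01        # assumed 1% chance of a loss per year
    loss_amount = 10_000.0         # assumed size of each loss

    expected_losses = policyholders * loss_probability * loss_amount  # 100,000.0
    expected_loss_per_policy = expected_losses / policyholders        # 100.0
    premium = expected_loss_per_policy * 1.25  # add a margin for expenses and profit

    print(expected_loss_per_policy, premium)   # 100.0 125.0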
Risks which can be insured by private companies typically share seven common characteristics:
When a company insures an individual entity, there are basic legal requirements and regulations. Several commonly cited legal principles of insurance include:
To "indemnify" means to make whole again, or to be reinstated to the position that one was in, to the extent possible, prior to the happening of a specified event or peril. Accordingly, life insurance is generally not considered to be indemnity insurance, but rather "contingent" insurance (i.e., a claim arises on the occurrence of a specified event). There are generally three types of insurance contracts that seek to indemnify an insured:
From an insured's standpoint, the result is usually the same: the insurer pays the loss and claims expenses.
If the Insured has a "reimbursement" policy, the insured can be required to pay for a loss and then be "reimbursed" by the insurance carrier for the loss and out of pocket costs including, with the permission of the insurer, claim expenses.
Under a "pay on behalf" policy, the insurance carrier would defend and pay a claim on behalf of the insured who would not be out of pocket for anything. Most modern liability insurance is written on the basis of "pay on behalf" language, which enables the insurance carrier to manage and control the claim.
Under an "indemnification" policy, the insurance carrier can generally either "reimburse" or "pay on behalf of", whichever is more beneficial to it and the insured in the claim handling process.
An entity seeking to transfer risk (an individual, corporation, or association of any type, etc.) becomes the "insured" party once risk is assumed by an "insurer", the insuring party, by means of a contract, called an insurance policy. Generally, an insurance contract includes, at a minimum, the following elements: identification of participating parties (the insurer, the insured, the beneficiaries), the premium, the period of coverage, the particular loss event covered, the amount of coverage (i.e., the amount to be paid to the insured or beneficiary in the event of a loss), and exclusions (events not covered). An insured is thus said to be "indemnified" against the loss covered in the policy.
When insured parties experience a loss for a specified peril, the coverage entitles the policyholder to make a claim against the insurer for the covered amount of loss as specified by the policy. The fee paid by the insured to the insurer for assuming the risk is called the premium. Insurance premiums from many insureds are used to fund accounts reserved for later payment of claims – in theory for a relatively few claimants – and for overhead costs. So long as an insurer maintains adequate funds set aside for anticipated losses (called reserves), the remaining margin is an insurer's profit.
Policies typically include a number of exclusions, for example:
Insurers may prohibit certain activities which are considered dangerous and therefore excluded from coverage. One system for classifying activities according to whether they are authorised by insurers refers to "green light" approved activities and events, "yellow light" activities and events which require insurer consultation and/or waivers of liability, and "red light" activities and events which are prohibited and outside the scope of insurance cover.
Insurance can have various effects on society through the way that it changes who bears the cost of losses and damage. On one hand it can increase fraud; on the other it can help societies and individuals prepare for catastrophes and mitigate the effects of catastrophes on both households and societies.
Insurance can influence the probability of losses through moral hazard, insurance fraud, and preventive steps by the insurance company. Insurance scholars have typically used moral hazard to refer to the increased loss due to unintentional carelessness and insurance fraud to refer to increased risk due to intentional carelessness or indifference. Insurers attempt to address carelessness through inspections, policy provisions requiring certain types of maintenance, and possible discounts for loss mitigation efforts. While in theory insurers could encourage investment in loss reduction, some commentators have argued that in practice insurers had historically not aggressively pursued loss control measures—particularly to prevent disaster losses such as hurricanes—because of concerns over rate reductions and legal battles. However, since about 1996 insurers have begun to take a more active role in loss mitigation, such as through building codes.
According to the study books of The Chartered Insurance Institute, there are variant methods of insurance as follows:
Insurers may use the subscription business model, collecting premium payments periodically in return for on-going and/or compounding benefits offered to policyholders.
Insurers' business model aims to collect more in premium and investment income than is paid out in losses, and to also offer a competitive price which consumers will accept. Profit can be reduced to a simple equation: profit = earned premium + investment income − incurred loss − underwriting expenses.
Insurers make money in two ways: through underwriting, the process by which they select the risks to insure and decide how much in premiums to charge for accepting those risks, and by investing the premiums they collect from insured parties.
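A minimal Python sketch of the profit relationship described above, using hypothetical figures; the function name and the example numbers are illustrative assumptions, and real insurer accounting is considerably more involved:

    # Profit = earned premium + investment income - incurred losses - underwriting expenses
    def insurer_profit(earned_premium, investment_income,
                       incurred_losses, underwriting_expenses):
        return (earned_premium + investment_income
                - incurred_losses - underwriting_expenses)

    # Example: 100m premium, 8m investment income, 70m losses, 25m expenses.
    print(insurer_profit(100e6, 8e6, 70e6, 25e6))  # 13000000.0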
The most complicated aspect of insuring is the actuarial science of ratemaking (price-setting) of policies, which uses statistics and probability to approximate the rate of future claims based on a given risk. After producing rates, the insurer will use discretion to reject or accept risks through the underwriting process.
At the most basic level, initial rate-making involves looking at the frequency and severity of insured perils and the expected average payout resulting from these perils. Thereafter an insurance company will collect historical loss-data, bring the loss data to present value, and compare these prior losses to the premium collected in order to assess rate adequacy. Loss ratios and expense loads are also used. Rating for different risk characteristics involves—at the most basic level—comparing the losses with "loss relativities"—a policy with twice as many losses would, therefore, be charged twice as much. More complex multivariate analyses are sometimes used when multiple characteristics are involved and a univariate analysis could produce confounded results. Other statistical methods may be used in assessing the probability of future losses.
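The following sketch illustrates the frequency-times-severity idea and a simple loss relativity; all figures are hypothetical assumptions, and actual ratemaking uses far richer data and methods:

    # Illustrative pure-premium rate calculation (hypothetical numbers).
    claim_frequency = 0.05      # assumed expected claims per policy per year
    claim_severity = 8_000.0    # assumed expected cost per claim
    expense_load = 0.30         # assumed share of the gross rate taken by expenses/profit

    pure_premium = claim_frequency * claim_severity   # expected loss cost: 400.0
    gross_rate = pure_premium / (1.0 - expense_load)  # ~571.43

    # A risk class with twice the expected losses gets a relativity of 2.0,
    # i.e. it is charged twice as much.
    high_risk_rate = gross_rate * 2.0
    print(round(gross_rate, 2), round(high_risk_rate, 2))  # 571.43 1142.86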
Upon termination of a given policy, the amount of premium collected minus the amount paid out in claims is the insurer's underwriting profit on that policy. Underwriting performance is measured by something called the "combined ratio", which is the ratio of expenses and losses to premiums. A combined ratio of less than 100% indicates an underwriting profit, while anything over 100% indicates an underwriting loss. A company with a combined ratio over 100% may nevertheless remain profitable due to investment earnings.
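A short worked example of the combined ratio, again with hypothetical figures: a ratio above 100% means an underwriting loss, but investment income can still leave the insurer profitable overall:

    # Combined ratio sketch (hypothetical figures).
    earned_premium = 100e6
    incurred_losses = 75e6
    underwriting_expenses = 30e6
    investment_income = 9e6

    combined_ratio = (incurred_losses + underwriting_expenses) / earned_premium
    underwriting_result = earned_premium - incurred_losses - underwriting_expenses
    overall_result = underwriting_result + investment_income

    print(f"{combined_ratio:.0%}")  # 105% -> an underwriting loss of 5 million
    print(overall_result)           # 4000000.0 -> still profitable overall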
Insurance companies earn investment profits on "float". Float, or available reserve, is the amount of money on hand at any given moment that an insurer has collected in insurance premiums but has not paid out in claims. Insurers start investing insurance premiums as soon as they are collected and continue to earn interest or other income on them until claims are paid out. The Association of British Insurers (grouping together 400 insurance companies and 94% of UK insurance services) has almost 20% of the investments in the London Stock Exchange. In 2007, U.S. industry profits from float totaled $58 billion. In a 2009 letter to investors, Warren Buffett wrote, "we were paid $2.8 billion to hold our float in 2008".
In the United States, the underwriting loss of property and casualty insurance companies was $142.3 billion in the five years ending 2003. But overall profit for the same period was $68.4 billion, as the result of float. Some insurance-industry insiders, most notably Hank Greenberg, do not believe that it is possible to sustain a profit from float forever without an underwriting profit as well, but this opinion is not universally held. Reliance on float for profit has led some industry experts to call insurance companies "investment companies that raise the money for their investments by selling insurance".
Naturally, the float method is difficult to carry out in an economically depressed period. Bear markets do cause insurers to shift away from investments and to toughen up their underwriting standards, so a poor economy generally means high insurance premiums. This tendency to swing between profitable and unprofitable periods over time is commonly known as the underwriting, or insurance, cycle.
Claims and loss handling is the materialized utility of insurance; it is the actual "product" paid for. Claims may be filed by insureds directly with the insurer or through brokers or agents. The insurer may require that the claim be filed on its own proprietary forms, or may accept claims on a standard industry form, such as those produced by ACORD.
Insurance-company claims departments employ a large number of claims adjusters, supported by a staff of records-management and data-entry clerks. Incoming claims are classified based on severity and are assigned to adjusters, whose settlement authority varies with their knowledge and experience. An adjuster undertakes an investigation of each claim, usually in close cooperation with the insured, determines if coverage is available under the terms of the insurance contract (and if so, the reasonable monetary value of the claim), and authorizes payment.
Policyholders may hire their own public adjusters to negotiate settlements with the insurance company on their behalf. For policies that are complicated, where claims may be complex, the insured may take out a separate insurance-policy add-on, called loss-recovery insurance, which covers the cost of a public adjuster in the case of a claim.
Adjusting liability-insurance claims is particularly difficult because they involve a third party, the plaintiff, who is under no contractual obligation to cooperate with the insurer and may in fact regard the insurer as a deep pocket. The adjuster must obtain legal counsel for the insured—either inside ("house") counsel or outside ("panel") counsel, monitor litigation that may take years to complete, and appear in person or over the telephone with settlement authority at a mandatory settlement-conference when requested by a judge.
If a claims adjuster suspects under-insurance, the condition of average may come into play to limit the insurance company's exposure.
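The condition of average is commonly applied as a proportional reduction; the following sketch uses generic figures and the usual proportional formula, not the wording of any particular policy:

```python
# Proportional (condition of average) reduction for under-insurance; figures are illustrative.
sum_insured = 150_000.0   # amount the property was actually insured for
full_value = 200_000.0    # amount it should have been insured for
loss = 40_000.0           # size of the claimed loss

# Payout is scaled by the ratio of the sum insured to the full value.
payout = loss * (sum_insured / full_value)
print(payout)  # 30000.0 -- the insured bears the remaining 10,000 of the loss
```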
In managing the claims-handling function, insurers seek to balance the elements of customer satisfaction, administrative handling expenses, and claims overpayment leakages. In addition to this balancing act, fraudulent insurance practices are a major business risk that insurers must manage and overcome. Disputes between insurers and insureds over the validity of claims or claims-handling practices occasionally escalate into litigation (see insurance bad faith).
Insurers will often use insurance agents to initially market to or underwrite their customers. Agents can be captive, meaning they write only for one company, or independent, meaning that they can issue policies from several companies. The existence and success of companies using insurance agents is likely due to the availability of improved and personalised services. Companies also use broking firms, banks, and other corporate entities (such as self-help groups, microfinance institutions, and NGOs) to market their products.
Any risk that can be quantified can potentially be insured. Specific kinds of risk that may give rise to claims are known as perils. An insurance policy will set out in detail which perils are covered by the policy and which are not. Below are non-exhaustive lists of the many different types of insurance that exist. A single policy may cover risks in one or more of the categories set out below. For example, vehicle insurance would typically cover both the property risk (theft or damage to the vehicle) and the liability risk (legal claims arising from an accident). A home insurance policy in the United States typically includes coverage for damage to the home and the owner's belongings, certain legal claims against the owner, and even a small amount of coverage for medical expenses of guests who are injured on the owner's property.
Business insurance can take a number of different forms, such as the various kinds of professional liability insurance, also called professional indemnity (PI), which are discussed below under that name; and the business owner's policy (BOP), which packages into one policy many of the kinds of coverage that a business owner needs, in a way analogous to how homeowners' insurance packages the coverages that a homeowner needs.
Vehicle insurance protects the policyholder against financial loss in the event of an incident involving a vehicle they own, such as in a traffic collision.
Coverage typically includes:
Gap insurance covers the excess amount on an auto loan in an instance where the policyholder's insurance company does not cover the entire loan. Depending on the company's specific policies it might or might not cover the deductible as well. This coverage is marketed toward those who make low down payments, have high interest rates on their loans, or have 60-month or longer terms. Gap insurance is typically offered by a finance company when the vehicle owner purchases their vehicle, but many auto insurance companies offer this coverage to consumers as well.
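A small worked example with hypothetical figures illustrates the "gap" such a policy is meant to cover:

```python
# Gap insurance example with invented figures.
loan_balance = 22_000.0       # amount still owed on the auto loan
actual_cash_value = 17_500.0  # what the primary insurer pays for the totaled vehicle

gap = loan_balance - actual_cash_value
print(gap)  # 4500.0 -- the shortfall a gap policy is designed to pay
# Whether the primary policy's deductible is also reimbursed depends on the gap policy's terms.
```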
Health insurance policies cover the cost of medical treatments. Dental insurance, like medical insurance, protects policyholders for dental costs. In most developed countries, all citizens receive some health coverage from their governments, paid through taxation. In most countries, health insurance is often part of an employer's benefits.
Casualty insurance insures against accidents, not necessarily tied to any specific property. It is a broad category of insurance under which a number of other types of insurance could be classified, such as auto, workers' compensation, and some liability insurances.
Life insurance provides a monetary benefit to a decedent's family or other designated beneficiary, and may specifically provide for income to an insured person's family, burial, funeral and other final expenses. Life insurance policies often allow the option of having the proceeds paid to the beneficiary either in a lump sum cash payment or an annuity. In most states, a person cannot purchase a policy on another person without their knowledge.
Annuities provide a stream of payments and are generally classified as insurance because they are issued by insurance companies, are regulated as insurance, and require the same kinds of actuarial and investment management expertise that life insurance requires. Annuities and pensions that pay a benefit for life are sometimes regarded as insurance against the possibility that a retiree will outlive his or her financial resources. In that sense, they are the complement of life insurance and, from an underwriting perspective, are the mirror image of life insurance.
Certain life insurance contracts accumulate cash values, which may be taken by the insured if the policy is surrendered or which may be borrowed against. Some policies, such as annuities and endowment policies, are financial instruments to accumulate or liquidate wealth when it is needed.
In many countries, such as the United States and the UK, the tax law provides that the interest on this cash value is not taxable under certain circumstances. This leads to widespread use of life insurance as a tax-efficient method of saving as well as protection in the event of early death.
In the United States, the tax on interest income on life insurance policies and annuities is generally deferred. However, in some cases the benefit derived from tax deferral may be offset by a low return. This depends upon the insuring company, the type of policy and other variables (mortality, market return, etc.). Moreover, other income tax saving vehicles (e.g., IRAs, 401(k) plans, Roth IRAs) may be better alternatives for value accumulation.
Burial insurance is an old type of life insurance which is paid out upon death to cover final expenses, such as the cost of a funeral. The Greeks and Romans introduced burial insurance c. 600 CE when they organized guilds called "benevolent societies" which cared for the surviving families and paid funeral expenses of members upon death. Guilds in the Middle Ages served a similar purpose, as did friendly societies during Victorian times.
Property insurance provides protection against risks to property, such as fire, theft or weather damage. This may include specialized forms of insurance such as fire insurance, flood insurance, earthquake insurance, home insurance, inland marine insurance or boiler insurance. The term property insurance may, like casualty insurance, be used as a broad category of various subtypes of insurance, some of which are listed below:
Liability insurance is a broad superset that covers legal claims against the insured. Many types of insurance include an aspect of liability coverage. For example, a homeowner's insurance policy will normally include liability coverage which protects the insured in the event of a claim brought by someone who slips and falls on the property; automobile insurance also includes an aspect of liability insurance that indemnifies against the harm that a crashing car can cause to others' lives, health, or property. The protection offered by a liability insurance policy is twofold: a legal defense in the event of a lawsuit commenced against the policyholder and indemnification (payment on behalf of the insured) with respect to a settlement or court verdict. Liability policies typically cover only the negligence of the insured, and will not apply to results of wilful or intentional acts by the insured.
Often a commercial insured's liability insurance program consists of several layers. The first layer of insurance generally consists of primary insurance, which provides first dollar indemnity for judgments and settlements up to the limits of liability of the primary policy. Generally, primary insurance is subject to a deductible and obligates the insurer to defend the insured against lawsuits, which is normally accomplished by assigning counsel to defend the insured. In many instances, a commercial insured may elect to self-insure. Above the primary insurance or self-insured retention, the insured may have one or more layers of excess insurance to provide additional limits of indemnity protection. There are a variety of types of excess insurance, including "stand-alone" excess policies (policies that contain their own terms, conditions, and exclusions), "follow form" excess insurance (policies that follow the terms of the underlying policy except as specifically provided), and "umbrella" insurance policies (excess insurance that in some circumstances could provide coverage that is broader than the underlying insurance).
Credit insurance repays some or all of a loan when the borrower is insolvent.
Cyber-insurance is a business lines insurance product intended to provide coverage to corporations from Internet-based risks, and more generally from risks relating to information technology infrastructure, information privacy, information governance liability, and activities related thereto.
Some communities prefer to create virtual insurance among themselves by other means than contractual risk transfer, which assigns explicit numerical values to risk. A number of religious groups, including the Amish and some Muslim groups, depend on support provided by their communities when disasters strike. The risk presented by any given person is assumed collectively by the community who all bear the cost of rebuilding lost property and supporting people whose needs are suddenly greater after a loss of some kind. In supportive communities where others can be trusted to follow community leaders, this tacit form of insurance can work. In this manner the community can even out the extreme differences in insurability that exist among its members. Some further justification is also provided by invoking the moral hazard of explicit insurance contracts.
In the United Kingdom, The Crown (which, for practical purposes, meant the civil service) did not insure property such as government buildings. If a government building was damaged, the cost of repair would be met from public funds because, in the long run, this was cheaper than paying insurance premiums. Since many UK government buildings have been sold to property companies and rented back, this arrangement is now less common.
In the United States, the most prevalent form of self-insurance is governmental risk management pools. They are self-funded cooperatives, operating as carriers of coverage for the majority of governmental entities today, such as county governments, municipalities, and school districts. Rather than having these entities independently self-insure and risk bankruptcy from a large judgment or catastrophic loss, such governmental entities form a risk pool. Such pools begin their operations by capitalization through member deposits or bond issuance. Coverage (such as general liability, auto liability, professional liability, workers compensation, and property) is offered by the pool to its members, similar to coverage offered by insurance companies. However, self-insured pools offer members lower rates (due to not needing insurance brokers), increased benefits (such as loss prevention services) and subject matter expertise. Of approximately 91,000 distinct governmental entities operating in the United States, 75,000 are members of self-insured pools in various lines of coverage, forming approximately 500 pools. Although a relatively small corner of the insurance market, the annual contributions (self-insured premiums) to such pools have been estimated to be as much as 17 billion dollars.
Insurance companies may provide any combination of insurance types, but are often classified into three groups:
General insurance companies can be further divided into these subcategories.
In most countries, life and non-life insurers are subject to different regulatory regimes and different tax and accounting rules. The main reason for the distinction between the two types of company is that life, annuity, and pension business is long-term in nature – coverage for life assurance or a pension can cover risks over many decades. By contrast, non-life insurance cover usually covers a shorter period, such as one year.
Insurance companies are commonly classified as either mutual or proprietary companies. Mutual companies are owned by the policyholders, while shareholders (who may or may not own policies) own proprietary insurance companies.
Demutualization of mutual insurers to form stock companies, as well as the formation of a hybrid known as a mutual holding company, became common in some countries, such as the United States, in the late 20th century. However, not all states permit mutual holding companies.
Reinsurance companies are insurance companies that provide policies to other insurance companies, allowing them to reduce their risks and protect themselves from substantial losses. The reinsurance market is dominated by a few large companies with huge reserves. A reinsurer may also be a direct writer of insurance risks.
Captive insurance companies can be defined as limited-purpose insurance companies established with the specific objective of financing risks emanating from their parent group or groups. This definition can sometimes be extended to include some of the risks of the parent company's customers. In short, it is an in-house self-insurance vehicle. Captives may take the form of a "pure" entity, which is a 100% subsidiary of the self-insured parent company; of a "mutual" captive, which insures the collective risks of members of an industry; and of an "association" captive, which self-insures individual risks of the members of a professional, commercial or industrial association. Captives represent commercial, economic and tax advantages to their sponsors because of the reductions in costs they help create and for the ease of insurance risk management and the flexibility for cash flows they generate. Additionally, they may provide coverage of risks which is neither available nor offered in the traditional insurance market at reasonable prices.
The types of risk that a captive can underwrite for their parents include property damage, public and product liability, professional indemnity, employee benefits, employers' liability, motor and medical aid expenses. The captive's exposure to such risks may be limited by the use of reinsurance.
Captives are becoming an increasingly important component of the risk management and risk financing strategy of their parent. This can be understood against the following background:
Other possible forms for an insurance company include reciprocals, in which policyholders reciprocate in sharing risks, and Lloyd's organizations.
Admitted insurance companies are those in the United States that have been admitted or licensed by the state licensing agency. The insurance they provide is called admitted insurance. Non-admitted companies have not been approved by the state licensing agency, but are allowed to provide insurance under special circumstances when they meet an insurance need that admitted companies cannot or will not meet.
There are also companies known as "insurance consultants". Like a mortgage broker, these companies are paid a fee by the customer to shop around for the best insurance policy among many companies. Similar to an insurance consultant, an "insurance broker" also shops around for the best insurance policy among many companies. However, with insurance brokers, the fee is usually paid in the form of commission from the insurer that is selected rather than directly from the client.
Neither insurance consultants nor insurance brokers are insurance companies and no risks are transferred to them in insurance transactions. Third party administrators are companies that perform underwriting and sometimes claims handling services for insurance companies. These companies often have special expertise that the insurance companies do not have.
The financial stability and strength of an insurance company is a consideration when buying an insurance contract. An insurance premium paid currently provides coverage for losses that might arise many years in the future. For that reason, a more financially stable insurance carrier reduces the risk of the insurance company becoming insolvent, leaving their policyholders with no coverage (or coverage only from a government-backed insurance pool or other arrangements with less attractive payouts for losses). A number of independent rating agencies provide information and rate the financial viability of insurance companies.
Insurance companies are rated by various agencies such as AM Best. The ratings include the company's financial strength, which measures its ability to pay claims. It also rates financial instruments issued by the insurance company, such as bonds, notes, and securitization products.
Advanced economies account for the bulk of the global insurance industry. According to Swiss Re, the global insurance market wrote $6.287 trillion in direct premiums in 2020. ("Direct premiums" means premiums written directly by insurers before accounting for ceding of risk to reinsurers.) As usual, the United States was the country with the largest insurance market with $2.530 trillion (40.3%) of direct premiums written, with the People's Republic of China coming in second at only $574 billion (9.3%), Japan coming in third at $438 billion (7.1%), and the United Kingdom coming in fourth at $380 billion (6.2%). However, the European Union's single market is the actual second largest market, with 18 percent market share.
In the United States, insurance is regulated by the states under the McCarran–Ferguson Act, with "periodic proposals for federal intervention", and a nonprofit coalition of state insurance agencies called the National Association of Insurance Commissioners works to harmonize the country's different laws and regulations. The National Conference of Insurance Legislators (NCOIL) also works to harmonize the different state laws.
In the European Union, the Third Non-Life Directive and the Third Life Directive, both passed in 1992 and effective 1994, created a single insurance market in Europe and allowed insurance companies to offer insurance anywhere in the EU (subject to permission from authority in the head office) and allowed insurance consumers to purchase insurance from any insurer in the EU. With regard to insurance in the United Kingdom, the Financial Services Authority took over insurance regulation from the General Insurance Standards Council in 2005; laws passed include the Insurance Companies Act 1973 and another in 1982, with reforms to warranties and other aspects under discussion as of 2012.
The insurance industry in China was nationalized in 1949 and thereafter offered by only a single state-owned company, the People's Insurance Company of China, which was eventually suspended as demand declined in a communist environment. In 1978, market reforms led to an increase in the market and by 1995 a comprehensive Insurance Law of the People's Republic of China was passed, followed in 1998 by the formation of China Insurance Regulatory Commission (CIRC), which has broad regulatory authority over the insurance market of China.
In India, the Insurance Regulatory and Development Authority (IRDA) is the insurance regulatory authority; it was constituted under section 4 of the IRDA Act 1999, an act of parliament. The National Insurance Academy, Pune, is the apex insurance capacity-building institute, promoted with support from the Ministry of Finance and by LIC and the life and general insurance companies.
In 2017, as part of a joint project of the Bank of Russia and Yandex, a special check mark (a green circle with a tick and a 'Реестр ЦБ РФ' (Unified state register of insurance entities) text box) appeared in Yandex search results, informing consumers that the financial services on the marked website are offered by a company with the status of an insurance company, a broker or a mutual insurance association.
Insurance is just a risk transfer mechanism wherein the financial burden which may arise from some fortuitous event is transferred to a bigger entity (an insurance company) by way of paying premiums. This only reduces the financial burden, not the actual probability of the event occurring. Insurance is a risk for both the insurance company and the insured. The insurance company understands the risk involved and will perform a risk assessment when writing the policy.
As a result, the premiums may go up if they determine that the policyholder is likely to file a claim. However, premiums might be reduced if the policyholder commits to a risk management program as recommended by the insurer. It is therefore important that insurers view risk management as a joint initiative between policyholder and insurer, since a robust risk management plan minimizes the possibility of a large claim for the insurer while stabilizing or reducing premiums for the policyholder.
If a person is financially stable and plans for life's unexpected events, they may be able to go without insurance. However, they must have enough to cover a total and complete loss of employment and of their possessions. Some states will accept a surety bond, a government bond, or even a cash deposit with the state.
An insurance company may inadvertently find that its insureds may not be as risk-averse as they might otherwise be (since, by definition, the insured has transferred the risk to the insurer), a concept known as moral hazard. This 'insulates' many from the true costs of living with risk, negating measures that can mitigate or adapt to risk and leading some to describe insurance schemes as potentially maladaptive.
Insurance policies can be complex and some policyholders may not understand all the fees and coverages included in a policy. As a result, people may buy policies on unfavorable terms. In response to these issues, many countries have enacted detailed statutory and regulatory regimes governing every aspect of the insurance business, including minimum standards for policies and the ways in which they may be advertised and sold.
For example, most insurance policies in the English language today have been carefully drafted in plain English; the industry learned the hard way that many courts will not enforce policies against insureds when the judges themselves cannot understand what the policies are saying. Typically, courts construe ambiguities in insurance policies against the insurance company and in favor of coverage under the policy.
Many institutional insurance purchasers buy insurance through an insurance broker. While on the surface it appears the broker represents the buyer (not the insurance company), and typically counsels the buyer on appropriate coverage and policy limitations, in the vast majority of cases a broker's compensation comes in the form of a commission as a percentage of the insurance premium, creating a conflict of interest in that the broker's financial interest is tilted toward encouraging an insured to purchase more insurance than might be necessary at a higher price. A broker generally holds contracts with many insurers, thereby allowing the broker to "shop" the market for the best rates and coverage possible.
Insurance may also be purchased through an agent. A tied agent, working exclusively with one insurer, represents the insurance company from whom the policyholder buys (while a free agent sells policies of various insurance companies). Just as there is a potential conflict of interest with a broker, an agent has a different type of conflict. Because agents work directly for the insurance company, if there is a claim the agent may advise the client to the benefit of the insurance company. Agents generally cannot offer as broad a range of selection compared to an insurance broker.
An independent insurance consultant advises insureds on a fee-for-service retainer, similar to an attorney, and thus offers completely independent advice, free of the financial conflict of interest of brokers or agents. However, such a consultant must still work through brokers or agents in order to secure coverage for their clients.
In the United States, economists and consumer advocates generally consider insurance to be worthwhile for low-probability, catastrophic losses, but not for high-probability, small losses. Because of this, consumers are advised to select high deductibles and not to insure losses which would not cause a disruption in their life. However, consumers have shown a tendency to prefer low deductibles and to prefer insuring relatively high-probability, small losses over low-probability, larger losses, perhaps because they do not understand or they ignore the low-probability risk. This is associated with reduced purchasing of insurance against low-probability losses, and may result in increased inefficiencies from moral hazard.
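This economic reasoning can be sketched numerically; the probability, loss amount, and premium loading below are invented purely for illustration:

```python
# Illustrative comparison only: expected cost of insuring a small, frequent loss.
probability_of_loss = 0.20   # a relatively likely, small loss
loss_amount = 500.0
expected_loss = probability_of_loss * loss_amount   # 100.0

premium_loading = 0.40       # assumed expenses and profit built into the premium
premium = expected_loss * (1 + premium_loading)      # 140.0

print(f"Expected out-of-pocket cost without insurance: {expected_loss:.0f}")
print(f"Premium paid with insurance:                   {premium:.0f}")
# For low-probability, catastrophic losses the same loading can be well worth paying,
# because the uninsured loss would be financially disruptive or ruinous.
```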
Redlining is the practice of denying insurance coverage in specific geographic areas, supposedly because of a high likelihood of loss, while the alleged motivation is unlawful discrimination. Racial profiling or redlining has a long history in the property insurance industry in the United States. From a review of industry underwriting and marketing materials, court documents, and research by government agencies, industry and community groups, and academics, it is clear that race has long affected and continues to affect the policies and practices of the insurance industry.
In July 2007, the US Federal Trade Commission (FTC) released a report presenting the results of a study concerning credit-based insurance scores in automobile insurance. The study found that these scores are effective predictors of risk. It also showed that African-Americans and Hispanics are substantially overrepresented in the lowest credit scores, and substantially underrepresented in the highest, while Caucasians and Asians are more evenly spread across the scores. The credit scores were also found to predict risk within each of the ethnic groups, leading the FTC to conclude that the scoring models are not solely proxies for redlining. The FTC indicated little data was available to evaluate benefit of insurance scores to consumers. The report was disputed by representatives of the Consumer Federation of America, the National Fair Housing Alliance, the National Consumer Law Center, and the Center for Economic Justice, for relying on data provided by the insurance industry.
All states have provisions in their rate regulation laws or in their fair trade practice acts that prohibit unfair discrimination, often called redlining, in setting rates and making insurance available.
In determining premiums and premium rate structures, insurers consider quantifiable factors, including location, credit scores, gender, occupation, marital status, and education level. However, the use of such factors is often considered to be unfair or unlawfully discriminatory, and the reaction against this practice has in some instances led to political disputes about the ways in which insurers determine premiums and regulatory intervention to limit the factors used.
An insurance underwriter's job is to evaluate a given risk as to the likelihood that a loss will occur. Any factor that causes a greater likelihood of loss should theoretically be charged a higher rate. This basic principle of insurance must be followed if insurance companies are to remain solvent. Thus, "discrimination" against (i.e., negative differential treatment of) potential insureds in the risk evaluation and premium-setting process is a necessary by-product of the fundamentals of insurance underwriting. For instance, insurers charge older people significantly higher premiums than they charge younger people for term life insurance. Older people are thus treated differently from younger people (i.e., a distinction is made, discrimination occurs). The rationale for the differential treatment goes to the heart of the risk a life insurer takes: older people are likely to die sooner than young people, so the risk of loss (the insured's death) is greater in any given period of time and therefore the risk premium must be higher to cover the greater risk. However, treating insureds differently when there is no actuarially sound reason for doing so is unlawful discrimination.
New assurance products can now be protected from copying with a business method patent in the United States.
A recent example of a new insurance product that is patented is Usage Based auto insurance. Early versions were independently invented and patented by a major US auto insurance company, Progressive Auto Insurance (U.S. Patent 5,797,134) and a Spanish independent inventor, Salvador Minguijon Perez.
Many independent inventors are in favor of patenting new insurance products since it gives them protection from big companies when they bring their new insurance products to market. Independent inventors account for 70% of the new U.S. patent applications in this area.
Many insurance executives are opposed to patenting insurance products because it creates a new risk for them. The Hartford insurance company, for example, recently had to pay $80 million to an independent inventor, Bancorp Services, in order to settle a patent infringement and theft of trade secret lawsuit for a type of corporate owned life insurance product invented and patented by Bancorp.
There are currently about 150 new patent applications on insurance inventions filed per year in the United States. The rate at which patents have been issued has steadily risen from 15 in 2002 to 44 in 2006.
Further examples of insurance patent filings include an application, posted on 6 March 2009, that describes a method for increasing the ease of changing insurance companies.
Insurance on demand (also IoD) is an insurance service that provides clients with insurance protection when they need it, i.e. on an episodic rather than a continuous (24/7) basis as typically provided by traditional insurers (for example, clients can purchase insurance for a single flight rather than a longer-lasting travel insurance plan).
Certain insurance products and practices have been described as rent-seeking by critics. That is, some insurance products or practices are useful primarily because of legal benefits, such as reducing taxes, as opposed to providing protection against risks of adverse events.
Muslim scholars have varying opinions about life insurance. Life insurance policies that earn interest (or guaranteed bonus/NAV) are generally considered to be a form of riba (usury) and some consider even policies that do not earn interest to be a form of gharar (speculation). Some argue that gharar is not present due to the actuarial science behind the underwriting. Jewish rabbinical scholars also have expressed reservations regarding insurance as an avoidance of God's will but most find it acceptable in moderation.
Some Christians believe insurance represents a lack of faith, and there is a long history of resistance to commercial insurance in Anabaptist communities (Mennonites, Amish, Hutterites, Brethren in Christ), but many participate in community-based self-insurance programs that spread risk within their communities.
Country-specific articles:
{
"paragraph_id": 0,
"text": "Insurance is a means of protection from financial loss in which, in exchange for a fee, a party agrees to compensate another party in the event of a certain loss, damage, or injury. It is a form of risk management, primarily used to hedge against the risk of a contingent or uncertain loss.",
"title": ""
},
{
"paragraph_id": 1,
"text": "An entity which provides insurance is known as an insurer, insurance company, insurance carrier, or underwriter. A person or entity who buys insurance is known as a policyholder, while a person or entity covered under the policy is called an insured. The insurance transaction involves the policyholder assuming a guaranteed, known, and relatively small loss in the form of a payment to the insurer (a premium) in exchange for the insurer's promise to compensate the insured in the event of a covered loss. The loss may or may not be financial, but it must be reducible to financial terms. Furthermore, it usually involves something in which the insured has an insurable interest established by ownership, possession, or pre-existing relationship.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The insured receives a contract, called the insurance policy, which details the conditions and circumstances under which the insurer will compensate the insured, or their designated beneficiary or assignee. The amount of money charged by the insurer to the policyholder for the coverage set forth in the insurance policy is called the premium. If the insured experiences a loss which is potentially covered by the insurance policy, the insured submits a claim to the insurer for processing by a claims adjuster. A mandatory out-of-pocket expense required by an insurance policy before an insurer will pay a claim is called a deductible (or if required by a health insurance policy, a copayment). The insurer may hedge its own risk by taking out reinsurance, whereby another insurance company agrees to carry some of the risks, especially if the primary insurer deems the risk too large for it to carry.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Methods for transferring or distributing risk were practiced by Chinese and Indian traders as long ago as the 3rd and 2nd millennia BC, respectively. Chinese merchants travelling treacherous river rapids would redistribute their wares across many vessels to limit the loss due to any single vessel capsizing.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Codex Hammurabi Law 238 (c. 1755–1750 BC) stipulated that a sea captain, ship-manager, or ship charterer that saved a ship from total loss was only required to pay one-half the value of the ship to the ship-owner. In the Digesta seu Pandectae (533), the second volume of the codification of laws ordered by Justinian I (527–565), a legal opinion written by the Roman jurist Paulus in 235 AD was included about the Lex Rhodia (\"Rhodian law\"). It articulates the general average principle of marine insurance established on the island of Rhodes in approximately 1000 to 800 BC, plausibly by the Phoenicians during the proposed Dorian invasion and emergence of the purported Sea Peoples during the Greek Dark Ages (c. 1100–c. 750).",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The law of general average is the fundamental principle that underlies all insurance. In 1816, an archeological excavation in Minya, Egypt produced a Nerva–Antonine dynasty-era tablet from the ruins of the Temple of Antinous in Antinoöpolis, Aegyptus. The tablet prescribed the rules and membership dues of a burial society collegium established in Lanuvium, Italia in approximately 133 AD during the reign of Hadrian (117–138) of the Roman Empire. In 1851 AD, future U.S. Supreme Court Associate Justice Joseph P. Bradley (1870–1892 AD), once employed as an actuary for the Mutual Benefit Life Insurance Company, submitted an article to the Journal of the Institute of Actuaries. His article detailed an historical account of a Severan dynasty-era life table compiled by the Roman jurist Ulpian in approximately 220 AD that was also included in the Digesta.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Concepts of insurance has been also found in 3rd century BC Hindu scriptures such as Dharmasastra, Arthashastra and Manusmriti. The ancient Greeks had marine loans. Money was advanced on a ship or cargo, to be repaid with large interest if the voyage prospers. However, the money would not be repaid at all if the ship were lost, thus making the rate of interest high enough to pay for not only for the use of the capital but also for the risk of losing it (fully described by Demosthenes). Loans of this character have ever since been common in maritime lands under the name of bottomry and respondentia bonds.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The direct insurance of sea-risks for a premium paid independently of loans began in Belgium about 1300 AD.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Separate insurance contracts (i.e., insurance policies not bundled with loans or other kinds of contracts) were invented in Genoa in the 14th century, as were insurance pools backed by pledges of landed estates. The first known insurance contract dates from Genoa in 1347. In the next century, maritime insurance developed widely, and premiums were varied with risks. These new insurance contracts allowed insurance to be separated from investment, a separation of roles that first proved useful in marine insurance.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The earliest known policy of life insurance was made in the Royal Exchange, London, on the 18th of June 1583, for £383, 6s. 8d. for twelve months on the life of William Gibbons.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Insurance became far more sophisticated in Enlightenment-era Europe, where specialized varieties developed.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Property insurance as we know it today can be traced to the Great Fire of London, which in 1666 devoured more than 13,000 houses. The devastating effects of the fire converted the development of insurance \"from a matter of convenience into one of urgency, a change of opinion reflected in Sir Christopher Wren's inclusion of a site for \"the Insurance Office\" in his new plan for London in 1667.\" A number of attempted fire insurance schemes came to nothing, but in 1681, economist Nicholas Barbon and eleven associates established the first fire insurance company, the \"Insurance Office for Houses\", at the back of the Royal Exchange to insure brick and frame homes. Initially, 5,000 homes were insured by his Insurance Office.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "At the same time, the first insurance schemes for the underwriting of business ventures became available. By the end of the seventeenth century, London's growth as a centre for trade was increasing due to the demand for marine insurance. In the late 1680s, Edward Lloyd opened a coffee house, which became the meeting place for parties in the shipping industry wishing to insure cargoes and ships, including those willing to underwrite such ventures. These informal beginnings led to the establishment of the insurance market Lloyd's of London and several related shipping and insurance businesses.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Life insurance policies were taken out in the early 18th century. The first company to offer life insurance was the Amicable Society for a Perpetual Assurance Office, founded in London in 1706 by William Talbot and Sir Thomas Allen. Upon the same principle, Edward Rowe Mores established the Society for Equitable Assurances on Lives and Survivorship in 1762.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "It was the world's first mutual insurer and it pioneered age based premiums based on mortality rate laying \"the framework for scientific insurance practice and development\" and \"the basis of modern life assurance upon which all life assurance schemes were subsequently based.\"",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In the late 19th century \"accident insurance\" began to become available. The first company to offer accident insurance was the Railway Passengers Assurance Company, formed in 1848 in England to insure against the rising number of fatalities on the nascent railway system.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "The first international insurance rule was the York Antwerp Rules (YAR) for the distribution of costs between ship and cargo in the event of general average. In 1873 the \"Association for the Reform and Codification of the Law of Nations\", the forerunner of the International Law Association (ILA), was founded in Brussels. It published the first YAR in 1890, before switching to the present title of the \"International Law Association\" in 1895.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "By the late 19th century governments began to initiate national insurance programs against sickness and old age. Germany built on a tradition of welfare programs in Prussia and Saxony that began as early as in the 1840s. In the 1880s Chancellor Otto von Bismarck introduced old age pensions, accident insurance and medical care that formed the basis for Germany's welfare state. In Britain more extensive legislation was introduced by the Liberal government in the 1911 National Insurance Act. This gave the British working classes the first contributory system of insurance against illness and unemployment. This system was greatly expanded after the Second World War under the influence of the Beveridge Report, to form the first modern welfare state.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In 2008, the International Network of Insurance Associations (INIA), then an informal network, became active and it has been succeeded by the Global Federation of Insurance Associations (GFIA), which was formally founded in 2012 to aim to increase insurance industry effectiveness in providing input to international regulatory bodies and to contribute more effectively to the international dialogue on issues of common interest. It consists of its 40 member associations and 1 observer association in 67 countries, which companies account for around 89% of total insurance premiums worldwide.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Insurance involves pooling funds from many insured entities (known as exposures) to pay for the losses that only some insureds may incur. The insured entities are therefore protected from risk for a fee, with the fee being dependent upon the frequency and severity of the event occurring. In order to be an insurable risk, the risk insured against must meet certain characteristics. Insurance as a financial intermediary is a commercial enterprise and a major part of the financial services industry, but individual entities can also self-insure through saving money for possible future losses.",
"title": "Principles"
},
{
"paragraph_id": 20,
"text": "Risk which can be insured by private companies typically share seven common characteristics:",
"title": "Principles"
},
{
"paragraph_id": 21,
"text": "When a company insures an individual entity, there are basic legal requirements and regulations. Several commonly cited legal principles of insurance include:",
"title": "Principles"
},
{
"paragraph_id": 22,
"text": "To \"indemnify\" means to make whole again, or to be reinstated to the position that one was in, to the extent possible, prior to the happening of a specified event or peril. Accordingly, life insurance is generally not considered to be indemnity insurance, but rather \"contingent\" insurance (i.e., a claim arises on the occurrence of a specified event). There are generally three types of insurance contracts that seek to indemnify an insured:",
"title": "Principles"
},
{
"paragraph_id": 23,
"text": "From an insured's standpoint, the result is usually the same: the insurer pays the loss and claims expenses.",
"title": "Principles"
},
{
"paragraph_id": 24,
"text": "If the Insured has a \"reimbursement\" policy, the insured can be required to pay for a loss and then be \"reimbursed\" by the insurance carrier for the loss and out of pocket costs including, with the permission of the insurer, claim expenses.",
"title": "Principles"
},
{
"paragraph_id": 25,
"text": "Under a \"pay on behalf\" policy, the insurance carrier would defend and pay a claim on behalf of the insured who would not be out of pocket for anything. Most modern liability insurance is written on the basis of \"pay on behalf\" language, which enables the insurance carrier to manage and control the claim.",
"title": "Principles"
},
{
"paragraph_id": 26,
"text": "Under an \"indemnification\" policy, the insurance carrier can generally either \"reimburse\" or \"pay on behalf of\", whichever is more beneficial to it and the insured in the claim handling process.",
"title": "Principles"
},
{
"paragraph_id": 27,
"text": "An entity seeking to transfer risk (an individual, corporation, or association of any type, etc.) becomes the \"insured\" party once risk is assumed by an \"insurer\", the insuring party, by means of a contract, called an insurance policy. Generally, an insurance contract includes, at a minimum, the following elements: identification of participating parties (the insurer, the insured, the beneficiaries), the premium, the period of coverage, the particular loss event covered, the amount of coverage (i.e., the amount to be paid to the insured or beneficiary in the event of a loss), and exclusions (events not covered). An insured is thus said to be \"indemnified\" against the loss covered in the policy.",
"title": "Principles"
},
{
"paragraph_id": 28,
"text": "When insured parties experience a loss for a specified peril, the coverage entitles the policyholder to make a claim against the insurer for the covered amount of loss as specified by the policy. The fee paid by the insured to the insurer for assuming the risk is called the premium. Insurance premiums from many insureds are used to fund accounts reserved for later payment of claims – in theory for a relatively few claimants – and for overhead costs. So long as an insurer maintains adequate funds set aside for anticipated losses (called reserves), the remaining margin is an insurer's profit.",
"title": "Principles"
},
{
"paragraph_id": 29,
"text": "Policies typically include a number of exclusions, for example:",
"title": "Principles"
},
{
"paragraph_id": 30,
"text": "Insurers may prohibit certain activities which are considered dangerous and therefore excluded from coverage. One system for classifying activities according to whether they are authorised by insurers refers to \"green light\" approved activities and events, \"yellow light\" activities and events which require insurer consultation and/or waivers of liability, and \"red light\" activities and events which are prohibited and outside the scope of insurance cover.",
"title": "Principles"
},
{
"paragraph_id": 31,
"text": "Insurance can have various effects on society through the way that it changes who bears the cost of losses and damage. On one hand it can increase fraud; on the other it can help societies and individuals prepare for catastrophes and mitigate the effects of catastrophes on both households and societies.",
"title": "Social effects"
},
{
"paragraph_id": 32,
"text": "Insurance can influence the probability of losses through moral hazard, insurance fraud, and preventive steps by the insurance company. Insurance scholars have typically used moral hazard to refer to the increased loss due to unintentional carelessness and insurance fraud to refer to increased risk due to intentional carelessness or indifference. Insurers attempt to address carelessness through inspections, policy provisions requiring certain types of maintenance, and possible discounts for loss mitigation efforts. While in theory insurers could encourage investment in loss reduction, some commentators have argued that in practice insurers had historically not aggressively pursued loss control measures—particularly to prevent disaster losses such as hurricanes—because of concerns over rate reductions and legal battles. However, since about 1996 insurers have begun to take a more active role in loss mitigation, such as through building codes.",
"title": "Social effects"
},
{
"paragraph_id": 33,
"text": "According to the study books of The Chartered Insurance Institute, there are variant methods of insurance as follows:",
"title": "Social effects"
},
{
"paragraph_id": 34,
"text": "Insurers may use the subscription business model, collecting premium payments periodically in return for on-going and/or compounding benefits offered to policyholders.",
"title": "Insurers' business model"
},
{
"paragraph_id": 35,
"text": "Insurers' business model aims to collect more in premium and investment income than is paid out in losses, and to also offer a competitive price which consumers will accept. Profit can be reduced to a simple equation:",
"title": "Insurers' business model"
},
{
"paragraph_id": 36,
"text": "Insurers make money in two ways:",
"title": "Insurers' business model"
},
{
"paragraph_id": 37,
"text": "The most complicated aspect of insuring is the actuarial science of ratemaking (price-setting) of policies, which uses statistics and probability to approximate the rate of future claims based on a given risk. After producing rates, the insurer will use discretion to reject or accept risks through the underwriting process.",
"title": "Insurers' business model"
},
{
"paragraph_id": 38,
"text": "At the most basic level, initial rate-making involves looking at the frequency and severity of insured perils and the expected average payout resulting from these perils. Thereafter an insurance company will collect historical loss-data, bring the loss data to present value, and compare these prior losses to the premium collected in order to assess rate adequacy. Loss ratios and expense loads are also used. Rating for different risk characteristics involves—at the most basic level—comparing the losses with \"loss relativities\"—a policy with twice as many losses would, therefore, be charged twice as much. More complex multivariate analyses are sometimes used when multiple characteristics are involved and a univariate analysis could produce confounded results. Other statistical methods may be used in assessing the probability of future losses.",
"title": "Insurers' business model"
},
{
"paragraph_id": 39,
"text": "Upon termination of a given policy, the amount of premium collected minus the amount paid out in claims is the insurer's underwriting profit on that policy. Underwriting performance is measured by something called the \"combined ratio\", which is the ratio of expenses/losses to premiums. A combined ratio of less than 100% indicates an underwriting profit, while anything over 100 indicates an underwriting loss. A company with a combined ratio over 100% may nevertheless remain profitable due to investment earnings.",
"title": "Insurers' business model"
},
{
"paragraph_id": 40,
"text": "Insurance companies earn investment profits on \"float\". Float, or available reserve, is the amount of money on hand at any given moment that an insurer has collected in insurance premiums but has not paid out in claims. Insurers start investing insurance premiums as soon as they are collected and continue to earn interest or other income on them until claims are paid out. The Association of British Insurers (grouping together 400 insurance companies and 94% of UK insurance services) has almost 20% of the investments in the London Stock Exchange. In 2007, U.S. industry profits from float totaled $58 billion. In a 2009 letter to investors, Warren Buffett wrote, \"we were paid $2.8 billion to hold our float in 2008\".",
"title": "Insurers' business model"
},
{
"paragraph_id": 41,
"text": "In the United States, the underwriting loss of property and casualty insurance companies was $142.3 billion in the five years ending 2003. But overall profit for the same period was $68.4 billion, as the result of float. Some insurance-industry insiders, most notably Hank Greenberg, do not believe that it is possible to sustain a profit from float forever without an underwriting profit as well, but this opinion is not universally held. Reliance on float for profit has led some industry experts to call insurance companies \"investment companies that raise the money for their investments by selling insurance\".",
"title": "Insurers' business model"
},
{
"paragraph_id": 42,
"text": "Naturally, the float method is difficult to carry out in an economically depressed period. Bear markets do cause insurers to shift away from investments and to toughen up their underwriting standards, so a poor economy generally means high insurance-premiums. This tendency to swing between profitable and unprofitable periods over time is commonly known as the underwriting, or insurance, cycle.",
"title": "Insurers' business model"
},
{
"paragraph_id": 43,
"text": "Claims and loss handling is the materialized utility of insurance; it is the actual \"product\" paid for. Claims may be filed by insureds directly with the insurer or through brokers or agents. The insurer may require that the claim be filed on its own proprietary forms, or may accept claims on a standard industry form, such as those produced by ACORD.",
"title": "Insurers' business model"
},
{
"paragraph_id": 44,
"text": "Insurance-company claims departments employ a large number of claims adjusters, supported by a staff of records-management and data-entry clerks. Incoming claims are classified based on severity and are assigned to adjusters, whose settlement authority varies with their knowledge and experience. An adjuster undertakes an investigation of each claim, usually in close cooperation with the insured, determines if coverage is available under the terms of the insurance contract (and if so, the reasonable monetary value of the claim), and authorizes payment.",
"title": "Insurers' business model"
},
{
"paragraph_id": 45,
"text": "Policyholders may hire their own public adjusters to negotiate settlements with the insurance company on their behalf. For policies that are complicated, where claims may be complex, the insured may take out a separate insurance-policy add-on, called loss-recovery insurance, which covers the cost of a public adjuster in the case of a claim.",
"title": "Insurers' business model"
},
{
"paragraph_id": 46,
"text": "Adjusting liability-insurance claims is particularly difficult because they involve a third party, the plaintiff, who is under no contractual obligation to cooperate with the insurer and may in fact regard the insurer as a deep pocket. The adjuster must obtain legal counsel for the insured—either inside (\"house\") counsel or outside (\"panel\") counsel, monitor litigation that may take years to complete, and appear in person or over the telephone with settlement authority at a mandatory settlement-conference when requested by a judge.",
"title": "Insurers' business model"
},
{
"paragraph_id": 47,
"text": "If a claims adjuster suspects under-insurance, the condition of average may come into play to limit the insurance company's exposure.",
"title": "Insurers' business model"
},
{
"paragraph_id": 48,
"text": "In managing the claims-handling function, insurers seek to balance the elements of customer satisfaction, administrative handling expenses, and claims overpayment leakages. In addition to this balancing act, fraudulent insurance practices are a major business risk that insurers must manage and overcome. Disputes between insurers and insureds over the validity of claims or claims-handling practices occasionally escalate into litigation (see insurance bad faith).",
"title": "Insurers' business model"
},
{
"paragraph_id": 49,
"text": "Insurers will often use insurance agents to initially market or underwrite their customers. Agents can be captive, meaning they write only for one company, or independent, meaning that they can issue policies from several companies. The existence and success of companies using insurance agents is likely due to the availability of improved and personalised services. Companies also use Broking firms, Banks and other corporate entities (like Self Help Groups, Microfinance Institutions, NGOs, etc.) to market their products.",
"title": "Insurers' business model"
},
{
"paragraph_id": 50,
"text": "Any risk that can be quantified can potentially be insured. Specific kinds of risk that may give rise to claims are known as perils. An insurance policy will set out in detail which perils are covered by the policy and which are not. Below are non-exhaustive lists of the many different types of insurance that exist. A single policy may cover risks in one or more of the categories set out below. For example, vehicle insurance would typically cover both the property risk (theft or damage to the vehicle) and the liability risk (legal claims arising from an accident). A home insurance policy in the United States typically includes coverage for damage to the home and the owner's belongings, certain legal claims against the owner, and even a small amount of coverage for medical expenses of guests who are injured on the owner's property.",
"title": "Types"
},
{
"paragraph_id": 51,
"text": "Business insurance can take a number of different forms, such as the various kinds of professional liability insurance, also called professional indemnity (PI), which are discussed below under that name; and the business owner's policy (BOP), which packages into one policy many of the kinds of coverage that a business owner needs, in a way analogous to how homeowners' insurance packages the coverages that a homeowner needs.",
"title": "Types"
},
{
"paragraph_id": 52,
"text": "Vehicle insurance protects the policyholder against financial loss in the event of an incident involving a vehicle they own, such as in a traffic collision.",
"title": "Types"
},
{
"paragraph_id": 53,
"text": "Coverage typically includes:",
"title": "Types"
},
{
"paragraph_id": 54,
"text": "Gap insurance covers the excess amount on an auto loan in an instance where the policyholder's insurance company does not cover the entire loan. Depending on the company's specific policies it might or might not cover the deductible as well. This coverage is marketed for those who put low down payments, have high interest rates on their loans, and those with 60-month or longer terms. Gap insurance is typically offered by a finance company when the vehicle owner purchases their vehicle, but many auto insurance companies offer this coverage to consumers as well.",
"title": "Types"
},
{
"paragraph_id": 55,
"text": "Health insurance policies cover the cost of medical treatments. Dental insurance, like medical insurance, protects policyholders for dental costs. In most developed countries, all citizens receive some health coverage from their governments, paid through taxation. In most countries, health insurance is often part of an employer's benefits.",
"title": "Types"
},
{
"paragraph_id": 56,
"text": "Casualty insurance insures against accidents, not necessarily tied to any specific property. It is a broad spectrum of insurance that a number of other types of insurance could be classified, such as auto, workers compensation, and some liability insurances.",
"title": "Types"
},
{
"paragraph_id": 57,
"text": "Life insurance provides a monetary benefit to a decedent's family or other designated beneficiary, and may specifically provide for income to an insured person's family, burial, funeral and other final expenses. Life insurance policies often allow the option of having the proceeds paid to the beneficiary either in a lump sum cash payment or an annuity. In most states, a person cannot purchase a policy on another person without their knowledge.",
"title": "Types"
},
{
"paragraph_id": 58,
"text": "Annuities provide a stream of payments and are generally classified as insurance because they are issued by insurance companies, are regulated as insurance, and require the same kinds of actuarial and investment management expertise that life insurance requires. Annuities and pensions that pay a benefit for life are sometimes regarded as insurance against the possibility that a retiree will outlive his or her financial resources. In that sense, they are the complement of life insurance and, from an underwriting perspective, are the mirror image of life insurance.",
"title": "Types"
},
{
"paragraph_id": 59,
"text": "Certain life insurance contracts accumulate cash values, which may be taken by the insured if the policy is surrendered or which may be borrowed against. Some policies, such as annuities and endowment policies, are financial instruments to accumulate or liquidate wealth when it is needed.",
"title": "Types"
},
{
"paragraph_id": 60,
"text": "In many countries, such as the United States and the UK, the tax law provides that the interest on this cash value is not taxable under certain circumstances. This leads to widespread use of life insurance as a tax-efficient method of saving as well as protection in the event of early death.",
"title": "Types"
},
{
"paragraph_id": 61,
"text": "In the United States, the tax on interest income on life insurance policies and annuities is generally deferred. However, in some cases the benefit derived from tax deferral may be offset by a low return. This depends upon the insuring company, the type of policy and other variables (mortality, market return, etc.). Moreover, other income tax saving vehicles (e.g., IRAs, 401(k) plans, Roth IRAs) may be better alternatives for value accumulation.",
"title": "Types"
},
{
"paragraph_id": 62,
"text": "Burial insurance is an old type of life insurance which is paid out upon death to cover final expenses, such as the cost of a funeral. The Greeks and Romans introduced burial insurance c. 600 CE when they organized guilds called \"benevolent societies\" which cared for the surviving families and paid funeral expenses of members upon death. Guilds in the Middle Ages served a similar purpose, as did friendly societies during Victorian times.",
"title": "Types"
},
{
"paragraph_id": 63,
"text": "Property insurance provides protection against risks to property, such as fire, theft or weather damage. This may include specialized forms of insurance such as fire insurance, flood insurance, earthquake insurance, home insurance, inland marine insurance or boiler insurance. The term property insurance may, like casualty insurance, be used as a broad category of various subtypes of insurance, some of which are listed below:",
"title": "Types"
},
{
"paragraph_id": 64,
"text": "Liability insurance is a broad superset that covers legal claims against the insured. Many types of insurance include an aspect of liability coverage. For example, a homeowner's insurance policy will normally include liability coverage which protects the insured in the event of a claim brought by someone who slips and falls on the property; automobile insurance also includes an aspect of liability insurance that indemnifies against the harm that a crashing car can cause to others' lives, health, or property. The protection offered by a liability insurance policy is twofold: a legal defense in the event of a lawsuit commenced against the policyholder and indemnification (payment on behalf of the insured) with respect to a settlement or court verdict. Liability policies typically cover only the negligence of the insured, and will not apply to results of wilful or intentional acts by the insured.",
"title": "Types"
},
{
"paragraph_id": 65,
"text": "Often a commercial insured's liability insurance program consists of several layers. The first layer of insurance generally consists of primary insurance, which provides first dollar indemnity for judgments and settlements up to the limits of liability of the primary policy. Generally, primary insurance is subject to a deductible and obligates the insurer to defend the insured against lawsuits, which is normally accomplished by assigning counsel to defend the insured. In many instances, a commercial insured may elect to self-insure. Above the primary insurance or self-insured retention, the insured may have one or more layers of excess insurance to provide coverage additional limits of indemnity protection. There are a variety of types of excess insurance, including \"stand-alone\" excess policies (policies that contain their own terms, conditions, and exclusions), \"follow form\" excess insurance (policies that follow the terms of the underlying policy except as specifically provided), and \"umbrella\" insurance policies (excess insurance that in some circumstances could provide coverage that is broader than the underlying insurance).",
"title": "Types"
},
{
"paragraph_id": 66,
"text": "Credit insurance repays some or all of a loan when the borrower is insolvent.",
"title": "Types"
},
{
"paragraph_id": 67,
"text": "Cyber-insurance is a business lines insurance product intended to provide coverage to corporations from Internet-based risks, and more generally from risks relating to information technology infrastructure, information privacy, information governance liability, and activities related thereto.",
"title": "Types"
},
{
"paragraph_id": 68,
"text": "Some communities prefer to create virtual insurance among themselves by other means than contractual risk transfer, which assigns explicit numerical values to risk. A number of religious groups, including the Amish and some Muslim groups, depend on support provided by their communities when disasters strike. The risk presented by any given person is assumed collectively by the community who all bear the cost of rebuilding lost property and supporting people whose needs are suddenly greater after a loss of some kind. In supportive communities where others can be trusted to follow community leaders, this tacit form of insurance can work. In this manner the community can even out the extreme differences in insurability that exist among its members. Some further justification is also provided by invoking the moral hazard of explicit insurance contracts.",
"title": "Types"
},
{
"paragraph_id": 69,
"text": "In the United Kingdom, The Crown (which, for practical purposes, meant the civil service) did not insure property such as government buildings. If a government building was damaged, the cost of repair would be met from public funds because, in the long run, this was cheaper than paying insurance premiums. Since many UK government buildings have been sold to property companies and rented back, this arrangement is now less common.",
"title": "Types"
},
{
"paragraph_id": 70,
"text": "In the United States, the most prevalent form of self-insurance is governmental risk management pools. They are self-funded cooperatives, operating as carriers of coverage for the majority of governmental entities today, such as county governments, municipalities, and school districts. Rather than these entities independently self-insure and risk bankruptcy from a large judgment or catastrophic loss, such governmental entities form a risk pool. Such pools begin their operations by capitalization through member deposits or bond issuance. Coverage (such as general liability, auto liability, professional liability, workers compensation, and property) is offered by the pool to its members, similar to coverage offered by insurance companies. However, self-insured pools offer members lower rates (due to not needing insurance brokers), increased benefits (such as loss prevention services) and subject matter expertise. Of approximately 91,000 distinct governmental entities operating in the United States, 75,000 are members of self-insured pools in various lines of coverage, forming approximately 500 pools. Although a relatively small corner of the insurance market, the annual contributions (self-insured premiums) to such pools have been estimated up to 17 billion dollars annually.",
"title": "Types"
},
{
"paragraph_id": 71,
"text": "Insurance companies may provide any combination of insurance types, but are often classified into three groups:",
"title": "Insurance companies"
},
{
"paragraph_id": 72,
"text": "General insurance companies can be further divided into these sub categories.",
"title": "Insurance companies"
},
{
"paragraph_id": 73,
"text": "In most countries, life and non-life insurers are subject to different regulatory regimes and different tax and accounting rules. The main reason for the distinction between the two types of company is that life, annuity, and pension business is long-term in nature – coverage for life assurance or a pension can cover risks over many decades. By contrast, non-life insurance cover usually covers a shorter period, such as one year.",
"title": "Insurance companies"
},
{
"paragraph_id": 74,
"text": "Insurance companies are commonly classified as either mutual or proprietary companies. Mutual companies are owned by the policyholders, while shareholders (who may or may not own policies) own proprietary insurance companies.",
"title": "Insurance companies"
},
{
"paragraph_id": 75,
"text": "Demutualization of mutual insurers to form stock companies, as well as the formation of a hybrid known as a mutual holding company, became common in some countries, such as the United States, in the late 20th century. However, not all states permit mutual holding companies.",
"title": "Insurance companies"
},
{
"paragraph_id": 76,
"text": "Reinsurance companies are insurance companies that provide policies to other insurance companies, allowing them to reduce their risks and protect themselves from substantial losses. The reinsurance market is dominated by a few large companies with huge reserves. A reinsurer may also be a direct writer of insurance risks as well.",
"title": "Insurance companies"
},
{
"paragraph_id": 77,
"text": "Captive insurance companies can be defined as limited-purpose insurance companies established with the specific objective of financing risks emanating from their parent group or groups. This definition can sometimes be extended to include some of the risks of the parent company's customers. In short, it is an in-house self-insurance vehicle. Captives may take the form of a \"pure\" entity, which is a 100% subsidiary of the self-insured parent company; of a \"mutual\" captive, which insures the collective risks of members of an industry; and of an \"association\" captive, which self-insures individual risks of the members of a professional, commercial or industrial association. Captives represent commercial, economic and tax advantages to their sponsors because of the reductions in costs they help create and for the ease of insurance risk management and the flexibility for cash flows they generate. Additionally, they may provide coverage of risks which is neither available nor offered in the traditional insurance market at reasonable prices.",
"title": "Insurance companies"
},
{
"paragraph_id": 78,
"text": "The types of risk that a captive can underwrite for their parents include property damage, public and product liability, professional indemnity, employee benefits, employers' liability, motor and medical aid expenses. The captive's exposure to such risks may be limited by the use of reinsurance.",
"title": "Insurance companies"
},
{
"paragraph_id": 79,
"text": "Captives are becoming an increasingly important component of the risk management and risk financing strategy of their parent. This can be understood against the following background:",
"title": "Insurance companies"
},
{
"paragraph_id": 80,
"text": "Other possible forms for an insurance company include reciprocals, in which policyholders reciprocate in sharing risks, and Lloyd's organizations.",
"title": "Insurance companies"
},
{
"paragraph_id": 81,
"text": "Admitted insurance companies are those in the United States that have been admitted or licensed by the state licensing agency. The insurance they provide is called admitted insurance. Non-admitted companies have not been approved by the state licensing agency, but are allowed to provide insurance under special circumstances when they meet an insurance need that admitted companies cannot or will not meet.",
"title": "Insurance companies"
},
{
"paragraph_id": 82,
"text": "There are also companies known as \"insurance consultants\". Like a mortgage broker, these companies are paid a fee by the customer to shop around for the best insurance policy among many companies. Similar to an insurance consultant, an \"insurance broker\" also shops around for the best insurance policy among many companies. However, with insurance brokers, the fee is usually paid in the form of commission from the insurer that is selected rather than directly from the client.",
"title": "Insurance companies"
},
{
"paragraph_id": 83,
"text": "Neither insurance consultants nor insurance brokers are insurance companies and no risks are transferred to them in insurance transactions. Third party administrators are companies that perform underwriting and sometimes claims handling services for insurance companies. These companies often have special expertise that the insurance companies do not have.",
"title": "Insurance companies"
},
{
"paragraph_id": 84,
"text": "The financial stability and strength of an insurance company is a consideration when buying an insurance contract. An insurance premium paid currently provides coverage for losses that might arise many years in the future. For that reason, a more financially stable insurance carrier reduces the risk of the insurance company becoming insolvent, leaving their policyholders with no coverage (or coverage only from a government-backed insurance pool or other arrangements with less attractive payouts for losses). A number of independent rating agencies provide information and rate the financial viability of insurance companies.",
"title": "Insurance companies"
},
{
"paragraph_id": 85,
"text": "Insurance companies are rated by various agencies such as AM Best. The ratings include the company's financial strength, which measures its ability to pay claims. It also rates financial instruments issued by the insurance company, such as bonds, notes, and securitization products.",
"title": "Insurance companies"
},
{
"paragraph_id": 86,
"text": "Advanced economies account for the bulk of the global insurance industry. According to Swiss Re, the global insurance market wrote $6.287 trillion in direct premiums in 2020. (\"Direct premiums\" means premiums written directly by insurers before accounting for ceding of risk to reinsurers.) As usual, the United States was the country with the largest insurance market with $2.530 trillion (40.3%) of direct premiums written, with the People's Republic of China coming in second at only $574 billion (9.3%), Japan coming in third at $438 billion (7.1%), and the United Kingdom coming in fourth at $380 billion (6.2%). However, the European Union's single market is the actual second largest market, with 18 percent market share.",
"title": "Across the world"
},
{
"paragraph_id": 87,
"text": "In the United States, insurance is regulated by the states under the McCarran–Ferguson Act, with \"periodic proposals for federal intervention\", and a nonprofit coalition of state insurance agencies called the National Association of Insurance Commissioners works to harmonize the country's different laws and regulations. The National Conference of Insurance Legislators (NCOIL) also works to harmonize the different state laws.",
"title": "Across the world"
},
{
"paragraph_id": 88,
"text": "In the European Union, the Third Non-Life Directive and the Third Life Directive, both passed in 1992 and effective 1994, created a single insurance market in Europe and allowed insurance companies to offer insurance anywhere in the EU (subject to permission from authority in the head office) and allowed insurance consumers to purchase insurance from any insurer in the EU. As far as insurance in the United Kingdom, the Financial Services Authority took over insurance regulation from the General Insurance Standards Council in 2005; laws passed include the Insurance Companies Act 1973 and another in 1982, and reforms to warranty and other aspects under discussion as of 2012.",
"title": "Across the world"
},
{
"paragraph_id": 89,
"text": "The insurance industry in China was nationalized in 1949 and thereafter offered by only a single state-owned company, the People's Insurance Company of China, which was eventually suspended as demand declined in a communist environment. In 1978, market reforms led to an increase in the market and by 1995 a comprehensive Insurance Law of the People's Republic of China was passed, followed in 1998 by the formation of China Insurance Regulatory Commission (CIRC), which has broad regulatory authority over the insurance market of China.",
"title": "Across the world"
},
{
"paragraph_id": 90,
"text": "In India IRDA is insurance regulatory authority. As per the section 4 of IRDA Act 1999, Insurance Regulatory and Development Authority (IRDA), which was constituted by an act of parliament. National Insurance Academy, Pune is apex insurance capacity builder institute promoted with support from Ministry of Finance and by LIC, Life & General Insurance companies.",
"title": "Across the world"
},
{
"paragraph_id": 91,
"text": "In 2017, within the framework of the joint project of the Bank of Russia and Yandex, a special check mark (a green circle with a tick and 'Реестр ЦБ РФ' (Unified state register of insurance entities) text box) appeared in the search for Yandex system, informing the consumer that the company's financial services are offered on the marked website, which has the status of an insurance company, a broker or a mutual insurance association.",
"title": "Across the world"
},
{
"paragraph_id": 92,
"text": "Insurance is just a risk transfer mechanism wherein the financial burden which may arise due to some fortuitous event is transferred to a bigger entity (i.e., an insurance company) by way of paying premiums. This only reduces the financial burden and not the actual chances of happening of an event. Insurance is a risk for both the insurance company and the insured. The insurance company understands the risk involved and will perform a risk assessment when writing the policy.",
"title": "Controversies"
},
{
"paragraph_id": 93,
"text": "As a result, the premiums may go up if they determine that the policyholder will file a claim. However, premiums might reduce if the policyholder commits to a risk management program as recommended by the insurer. It is therefore important that insurers view risk management as a joint initiative between policyholder and insurer since a robust risk management plan minimizes the possibility of a large claim for the insurer while stabilizing or reducing premiums for the policyholder.",
"title": "Controversies"
},
{
"paragraph_id": 94,
"text": "If a person is financially stable and plans for life's unexpected events, they may be able to go without insurance. However, they must have enough to cover a total and complete loss of employment and of their possessions. Some states will accept a surety bond, a government bond, or even making a cash deposit with the state.",
"title": "Controversies"
},
{
"paragraph_id": 95,
"text": "An insurance company may inadvertently find that its insureds may not be as risk-averse as they might otherwise be (since, by definition, the insured has transferred the risk to the insurer), a concept known as moral hazard. This 'insulates' many from the true costs of living with risk, negating measures that can mitigate or adapt to risk and leading some to describe insurance schemes as potentially maladaptive.",
"title": "Controversies"
},
{
"paragraph_id": 96,
"text": "Insurance policies can be complex and some policyholders may not understand all the fees and coverages included in a policy. As a result, people may buy policies on unfavorable terms. In response to these issues, many countries have enacted detailed statutory and regulatory regimes governing every aspect of the insurance business, including minimum standards for policies and the ways in which they may be advertised and sold.",
"title": "Controversies"
},
{
"paragraph_id": 97,
"text": "For example, most insurance policies in the English language today have been carefully drafted in plain English; the industry learned the hard way that many courts will not enforce policies against insureds when the judges themselves cannot understand what the policies are saying. Typically, courts construe ambiguities in insurance policies against the insurance company and in favor of coverage under the policy.",
"title": "Controversies"
},
{
"paragraph_id": 98,
"text": "Many institutional insurance purchasers buy insurance through an insurance broker. While on the surface it appears the broker represents the buyer (not the insurance company), and typically counsels the buyer on appropriate coverage and policy limitations, in the vast majority of cases a broker's compensation comes in the form of a commission as a percentage of the insurance premium, creating a conflict of interest in that the broker's financial interest is tilted toward encouraging an insured to purchase more insurance than might be necessary at a higher price. A broker generally holds contracts with many insurers, thereby allowing the broker to \"shop\" the market for the best rates and coverage possible.",
"title": "Controversies"
},
{
"paragraph_id": 99,
"text": "Insurance may also be purchased through an agent. A tied agent, working exclusively with one insurer, represents the insurance company from whom the policyholder buys (while a free agent sells policies of various insurance companies). Just as there is a potential conflict of interest with a broker, an agent has a different type of conflict. Because agents work directly for the insurance company, if there is a claim the agent may advise the client to the benefit of the insurance company. Agents generally cannot offer as broad a range of selection compared to an insurance broker.",
"title": "Controversies"
},
{
"paragraph_id": 100,
"text": "An independent insurance consultant advises insureds on a fee-for-service retainer, similar to an attorney, and thus offers completely independent advice, free of the financial conflict of interest of brokers or agents. However, such a consultant must still work through brokers or agents in order to secure coverage for their clients.",
"title": "Controversies"
},
{
"paragraph_id": 101,
"text": "In the United States, economists and consumer advocates generally consider insurance to be worthwhile for low-probability, catastrophic losses, but not for high-probability, small losses. Because of this, consumers are advised to select high deductibles and to not insure losses which would not cause a disruption in their life. However, consumers have shown a tendency to prefer low deductibles and to prefer to insure relatively high-probability, small losses over low-probability, perhaps due to not understanding or ignoring the low-probability risk. This is associated with reduced purchasing of insurance against low-probability losses, and may result in increased inefficiencies from moral hazard.",
"title": "Controversies"
},
{
"paragraph_id": 102,
"text": "Redlining is the practice of denying insurance coverage in specific geographic areas, supposedly because of a high likelihood of loss, while the alleged motivation is unlawful discrimination. Racial profiling or redlining has a long history in the property insurance industry in the United States. From a review of industry underwriting and marketing materials, court documents, and research by government agencies, industry and community groups, and academics, it is clear that race has long affected and continues to affect the policies and practices of the insurance industry.",
"title": "Controversies"
},
{
"paragraph_id": 103,
"text": "In July 2007, the US Federal Trade Commission (FTC) released a report presenting the results of a study concerning credit-based insurance scores in automobile insurance. The study found that these scores are effective predictors of risk. It also showed that African-Americans and Hispanics are substantially overrepresented in the lowest credit scores, and substantially underrepresented in the highest, while Caucasians and Asians are more evenly spread across the scores. The credit scores were also found to predict risk within each of the ethnic groups, leading the FTC to conclude that the scoring models are not solely proxies for redlining. The FTC indicated little data was available to evaluate benefit of insurance scores to consumers. The report was disputed by representatives of the Consumer Federation of America, the National Fair Housing Alliance, the National Consumer Law Center, and the Center for Economic Justice, for relying on data provided by the insurance industry.",
"title": "Controversies"
},
{
"paragraph_id": 104,
"text": "All states have provisions in their rate regulation laws or in their fair trade practice acts that prohibit unfair discrimination, often called redlining, in setting rates and making insurance available.",
"title": "Controversies"
},
{
"paragraph_id": 105,
"text": "In determining premiums and premium rate structures, insurers consider quantifiable factors, including location, credit scores, gender, occupation, marital status, and education level. However, the use of such factors is often considered to be unfair or unlawfully discriminatory, and the reaction against this practice has in some instances led to political disputes about the ways in which insurers determine premiums and regulatory intervention to limit the factors used.",
"title": "Controversies"
},
{
"paragraph_id": 106,
"text": "An insurance underwriter's job is to evaluate a given risk as to the likelihood that a loss will occur. Any factor that causes a greater likelihood of loss should theoretically be charged a higher rate. This basic principle of insurance must be followed if insurance companies are to remain solvent. Thus, \"discrimination\" against (i.e., negative differential treatment of) potential insureds in the risk evaluation and premium-setting process is a necessary by-product of the fundamentals of insurance underwriting. For instance, insurers charge older people significantly higher premiums than they charge younger people for term life insurance. Older people are thus treated differently from younger people (i.e., a distinction is made, discrimination occurs). The rationale for the differential treatment goes to the heart of the risk a life insurer takes: older people are likely to die sooner than young people, so the risk of loss (the insured's death) is greater in any given period of time and therefore the risk premium must be higher to cover the greater risk. However, treating insureds differently when there is no actuarially sound reason for doing so is unlawful discrimination.",
"title": "Controversies"
},
{
"paragraph_id": 107,
"text": "New assurance products can now be protected from copying with a business method patent in the United States.",
"title": "Controversies"
},
{
"paragraph_id": 108,
"text": "A recent example of a new insurance product that is patented is Usage Based auto insurance. Early versions were independently invented and patented by a major US auto insurance company, Progressive Auto Insurance (U.S. Patent 5,797,134) and a Spanish independent inventor, Salvador Minguijon Perez.",
"title": "Controversies"
},
{
"paragraph_id": 109,
"text": "Many independent inventors are in favor of patenting new insurance products since it gives them protection from big companies when they bring their new insurance products to market. Independent inventors account for 70% of the new U.S. patent applications in this area.",
"title": "Controversies"
},
{
"paragraph_id": 110,
"text": "Many insurance executives are opposed to patenting insurance products because it creates a new risk for them. The Hartford insurance company, for example, recently had to pay $80 million to an independent inventor, Bancorp Services, in order to settle a patent infringement and theft of trade secret lawsuit for a type of corporate owned life insurance product invented and patented by Bancorp.",
"title": "Controversies"
},
{
"paragraph_id": 111,
"text": "There are currently about 150 new patent applications on insurance inventions filed per year in the United States. The rate at which patents have been issued has steadily risen from 15 in 2002 to 44 in 2006.",
"title": "Controversies"
},
{
"paragraph_id": 112,
"text": "The first insurance patent to be granted was including another example of an application posted was. It was posted on 6 March 2009. This patent application describes a method for increasing the ease of changing insurance companies.",
"title": "Controversies"
},
{
"paragraph_id": 113,
"text": "Insurance on demand (also IoD) is an insurance service that provides clients with insurance protection when they need, i.e. only episodic rather than on 24/7 basis as typically provided by traditional insurers (e.g. clients can purchase an insurance for one single flight rather than a longer-lasting travel insurance plan).",
"title": "Controversies"
},
{
"paragraph_id": 114,
"text": "Certain insurance products and practices have been described as rent-seeking by critics. That is, some insurance products or practices are useful primarily because of legal benefits, such as reducing taxes, as opposed to providing protection against risks of adverse events.",
"title": "Controversies"
},
{
"paragraph_id": 115,
"text": "Muslim scholars have varying opinions about life insurance. Life insurance policies that earn interest (or guaranteed bonus/NAV) are generally considered to be a form of riba (usury) and some consider even policies that do not earn interest to be a form of gharar (speculation). Some argue that gharar is not present due to the actuarial science behind the underwriting. Jewish rabbinical scholars also have expressed reservations regarding insurance as an avoidance of God's will but most find it acceptable in moderation.",
"title": "Controversies"
},
{
"paragraph_id": 116,
"text": "Some Christians believe insurance represents a lack of faith and there is a long history of resistance to commercial insurance in Anabaptist communities (Mennonites, Amish, Hutterites, Brethren in Christ) but many participate in community-based self-insurance programs that spread risk within their communities.",
"title": "Controversies"
},
{
"paragraph_id": 117,
"text": "Country-specific articles:",
"title": "See also"
}
]
| Insurance is a means of protection from financial loss in which, in exchange for a fee, a party agrees to compensate another party in the event of a certain loss, damage, or injury. It is a form of risk management, primarily used to hedge against the risk of a contingent or uncertain loss. An entity which provides insurance is known as an insurer, insurance company, insurance carrier, or underwriter. A person or entity who buys insurance is known as a policyholder, while a person or entity covered under the policy is called an insured. The insurance transaction involves the policyholder assuming a guaranteed, known, and relatively small loss in the form of a payment to the insurer in exchange for the insurer's promise to compensate the insured in the event of a covered loss. The loss may or may not be financial, but it must be reducible to financial terms. Furthermore, it usually involves something in which the insured has an insurable interest established by ownership, possession, or pre-existing relationship. The insured receives a contract, called the insurance policy, which details the conditions and circumstances under which the insurer will compensate the insured, or their designated beneficiary or assignee. The amount of money charged by the insurer to the policyholder for the coverage set forth in the insurance policy is called the premium. If the insured experiences a loss which is potentially covered by the insurance policy, the insured submits a claim to the insurer for processing by a claims adjuster. A mandatory out-of-pocket expense required by an insurance policy before an insurer will pay a claim is called a deductible. The insurer may hedge its own risk by taking out reinsurance, whereby another insurance company agrees to carry some of the risks, especially if the primary insurer deems the risk too large for it to carry. | 2001-10-21T21:16:33Z | 2023-12-21T17:05:16Z | [
"Template:Short description",
"Template:Redirect-distinguish",
"Template:Main",
"Template:Cite book",
"Template:Refbegin",
"Template:Refend",
"Template:Isbn",
"Template:As of",
"Template:Better source needed",
"Template:ISBN",
"Template:Cite news",
"Template:Authority control",
"Template:More citations needed",
"Template:Circa",
"Template:Div col end",
"Template:Cite ODNB",
"Template:Cite patent",
"Template:Curlie",
"Template:Insurance",
"Template:-",
"Template:Financial market participants",
"Template:By whom",
"Template:Lang",
"Template:NoteFoot",
"Template:Cite EB1911",
"Template:Citation",
"Template:Webarchive",
"Template:Major insurance companies",
"Template:Risk management",
"Template:Use dmy dates",
"Template:Sister project links",
"Template:NoteTag",
"Template:Citation needed",
"Template:Update",
"Template:Further",
"Template:Cite web",
"Template:US patent application",
"Template:Other uses",
"Template:Div col",
"Template:US patent",
"Template:Reflist",
"Template:Cite journal",
"Template:Industries"
]
| https://en.wikipedia.org/wiki/Insurance |
15,179 | Indira Gandhi | Indira Priyadarshini Gandhi (Hindi: [ˈɪndɪɾɑː ˈɡɑːndʱi]; née Nehru; 19 November 1917 – 31 October 1984) was an Indian politician and stateswoman who served as the 3rd Prime Minister of India from 1966 to 1977 and again from 1980 until her assassination in 1984. She was India's first and, to date, only female prime minister, and a central figure in Indian politics as the leader of the Indian National Congress. Gandhi was the daughter of Jawaharlal Nehru, the first prime minister of India, and the mother of Rajiv Gandhi, who succeeded her in office as the country's sixth prime minister. Furthermore, Gandhi's cumulative tenure of 15 years and 350 days makes her the second-longest-serving Indian prime minister after her father. Henry Kissinger described her as an "Iron Lady", a nickname that has been associated with her tough personality since her lifetime.
During Nehru's premiership from 1947 to 1964, Gandhi served as his hostess and accompanied him on his numerous foreign trips. In 1959, she played a part in the dissolution of the communist-led Kerala state government as then-president of the Indian National Congress, otherwise a ceremonial position to which she was elected earlier that year. Lal Bahadur Shastri, who had succeeded Nehru as prime minister upon his death in 1964, appointed her minister of information and broadcasting in his government; the same year she was elected to the Rajya Sabha, the upper house of the Indian Parliament. On Shastri's sudden death in January 1966, Gandhi defeated her rival, Morarji Desai, in the Congress Party's parliamentary leadership election to become leader and also succeeded Shastri as prime minister. She led the Congress to victory in two subsequent elections, starting with the 1967 general election, in which she was first elected to the lower house of the Indian parliament, the Lok Sabha. In 1971, the Congress Party headed by Gandhi managed to secure its first landslide victory since her father's sweep in 1962, focusing on issues such as poverty. But following the nationwide Emergency she had imposed, she faced massive anti-incumbency sentiment and lost the 1977 general election, the first such defeat for the Congress party. Gandhi was ousted from office and even lost her seat in parliament in the election. Nevertheless, her faction of the Congress Party won the next general election by a landslide, owing to Gandhi's leadership and the weak governance of the Janata Party, the first non-Congress government in independent modern India's history.
As prime minister, Gandhi was known for her political intransigence and unprecedented centralization of power. In 1967, she headed a military conflict with China in which India successfully repelled Chinese incursions in the Himalayas. In 1971, she went to war with Pakistan in support of the independence movement and war of independence in East Pakistan, which resulted in an Indian victory and the creation of Bangladesh, as well as increasing India's influence to the point where it became the sole regional power in South Asia. Gandhi's rule saw India grow closer to the Soviet Union by signing a friendship treaty in 1971, with India receiving military, financial, and diplomatic support from the Soviet Union during its conflict with Pakistan in the same year. Despite India being at the forefront of the non-aligned movement, Gandhi led India to become one of the Soviet Union's closest allies in Asia, with India and the Soviet Union often supporting each other in proxy wars and at the United Nations. Citing separatist tendencies and in response to a call for revolution, Gandhi instituted a state of emergency from 1975 to 1977, during which basic civil liberties were suspended and the press was censored. Widespread atrocities were carried out during that period. Gandhi faced the growing Sikh separatism throughout her third premiership; in response, she ordered Operation Blue Star, which involved military action in the Golden Temple and resulted in bloodshed with hundreds of Sikhs killed. On 31 October 1984, Gandhi was assassinated by her bodyguards, both of whom were Sikh nationalists seeking retribution for the events at the temple.
Indira Gandhi is remembered as the most powerful woman in the world during her tenure. Her supporters cite her leadership during victories over geopolitical rivals China and Pakistan, the Green Revolution, a growing economy in the early 1980s, and her anti-poverty campaign that led her to be known as "Mother Indira" (a pun on Mother India) among the country's poor and rural classes. However, critics note her authoritarian rule of India during the Emergency. In 1999, Gandhi was named "Woman of the Millennium" in an online poll organized by the BBC. In 2020, Gandhi was named by Time magazine among the 100 women who defined the past century as counterparts to the magazine's previous choices for Man of the Year.
Indira Gandhi was born Indira Nehru, into a Kashmiri Pandit family on 19 November 1917 in Allahabad. Her father, Jawaharlal Nehru, was a leading figure in the movement for independence from British rule, and became the first Prime Minister of the Dominion (and later Republic) of India. She was the only child (she had a younger brother who died young), and grew up with her mother, Kamala Nehru, at the Anand Bhavan, a large family estate in Allahabad. In 1930, the Nehru family donated the mansion to the Indian National Congress, and renamed it Swaraj Bhavan (lit. abode of freedom). A new mansion was built nearby to serve as the family residence and given the name of the old Anand Bhavan. Indira had a lonely and unhappy childhood. Her father was often away, directing political activities or incarcerated, while her mother was frequently bedridden with illness, and later suffered an early death from tuberculosis. She had limited contact with her father, mostly through letters.
Indira was taught mostly at home by tutors and attended school intermittently until matriculation in 1934. She was a student at the Modern School in Delhi, St. Cecilia's and St. Mary's Convent schools in Allahabad, the International School of Geneva, the Ecole Nouvelle in Bex, and the Pupils' Own School in Poona and Bombay, which is affiliated with the University of Mumbai. She and her mother Kamala moved to the Belur Math headquarters of the Ramakrishna Mission where Swami Ranganathananda was her guardian. She went on to study at the Vishwa Bharati in Santiniketan, which became Visva-Bharati University in 1951. It was during her interview with him that Rabindranath Tagore named her Priyadarshini, literally "looking at everything with kindness" in Sanskrit, and she came to be known as Indira Priyadarshini Nehru. A year later, however, she had to leave university to attend to her ailing mother in Europe. There it was decided that Indira would continue her education at the University of Oxford. After her mother died, she attended the Badminton School for a brief period before enrolling at Somerville College in 1937 to study history. Indira had to take the entrance examination twice, having failed at her first attempt with a poor performance in Latin. At Oxford, she did well in history, political science and economics, but her grades in Latin—a compulsory subject—remained poor. Indira did, however, have an active part within the student life of the university, such as membership in the Oxford Majlis Asian Society.
During her time in Europe, Indira was plagued with ill health and was constantly attended to by doctors. She had to make repeated trips to Switzerland to recover, disrupting her studies. She was being treated there in 1940, when Germany rapidly conquered Europe. Indira tried to return to England through Portugal but was left stranded for nearly two months. She managed to enter England in early 1941, and from there returned to India without completing her studies at Oxford. The university later awarded her an honorary degree. In 2010, Oxford honoured her further by selecting her as one of the ten Oxasians, illustrious Asian graduates from the University of Oxford. During her stay in Britain, Indira frequently met her future husband Feroze Gandhi (no relation to Mahatma Gandhi), whom she knew from Allahabad, and who was studying at the London School of Economics. Their marriage took place in Allahabad according to Adi Dharm rituals, though Feroze belonged to a Zoroastrian Parsi family of Gujarat. The couple had two sons, Rajiv Gandhi (born 1944) and Sanjay Gandhi (born 1946).
In September 1942, Indira was arrested over her role in the Quit India Movement. She was released from jail in April 1943. "Mud entered our souls in the drabness of prison," she later recalled of her time in jail. She added, "When I came out, it was such a shock to see colors again I thought I would go out of my mind."
In the 1950s, Indira, now Mrs. Indira Gandhi after her marriage, served her father unofficially as a personal assistant during his tenure as the first prime minister of India. Towards the end of the 1950s, Gandhi served as the President of the Congress. In that capacity, she was instrumental in getting the communist-led Kerala state government dismissed in 1959. That government was India's first elected communist government. After her father's death in 1964 she was appointed a member of the Rajya Sabha (upper house) and served in Prime Minister Lal Bahadur Shastri's cabinet as Minister of Information and Broadcasting. In January 1966, after Shastri's death, the Congress legislative party elected her over Morarji Desai as their leader. Congress party veteran K. Kamaraj was instrumental in Gandhi achieving victory. Because she was a woman, other political leaders in India saw Gandhi as weak and hoped to use her as a puppet once elected:
Congress President Kamaraj orchestrated Mrs. Gandhi's selection as prime minister because he perceived her to be weak enough that he and the other regional party bosses could control her, and yet strong enough to beat Desai [her political opponent] in a party election because of the high regard for her father ... a woman would be an ideal tool for the Syndicate.
Gandhi's first eleven years serving as prime minister saw her evolve from being perceived by Congress party leaders as their puppet into a strong leader with the iron resolve to split the party over her policy positions, or to go to war with Pakistan to assist Bangladesh in the 1971 liberation war. At the end of 1977, she was such a dominating figure in Indian politics that Congress party president D. K. Barooah had coined the phrase "India is Indira and Indira is India."
Gandhi formed her government with Morarji Desai as deputy prime minister and finance minister. At the beginning of her first term as prime minister, she was widely criticised by the media and the opposition as a "Goongi goodiya" (Hindi for a "dumb doll") of the Congress party bosses who had orchestrated her election and then tried to constrain her. Indira was a reluctant successor to her famed father, although she had accompanied him on several official foreign visits and played an anchor role in bringing down the first democratically elected communist government in Kerala. According to certain sources, it was the socialist leader Ram Manohar Lohia who first derided her as the "Goongi goodiya", a label later echoed by other Congress politicians who were wary of her rise in the party.
One of her first major actions was to crush the separatist Mizo National Front uprising in Mizoram in 1966.
The first electoral test for Gandhi was the 1967 general elections for the Lok Sabha and state assemblies. The Congress Party won a reduced majority in the Lok Sabha after these elections owing to widespread disenchantment over the rising prices of commodities, unemployment, economic stagnation and a food crisis. Gandhi was elected to the Lok Sabha from the Raebareli constituency. She had a rocky start after agreeing to devalue the rupee, which created hardship for Indian businesses and consumers. The importation of wheat from the United States fell through due to political disputes.
For the first time, the party also lost power or lost its majority in a number of states across the country. Following the 1967 elections, Gandhi gradually began to move towards socialist policies. In 1969, she fell out with senior Congress party leaders over several issues. Chief among them was her decision to support V. V. Giri, the independent candidate rather than the official Congress party candidate Neelam Sanjiva Reddy for the vacant position of president of India. The other was the announcement by the prime minister of Bank nationalisation without consulting the finance minister, Morarji Desai. These steps culminated in party president S. Nijalingappa expelling her from the party for indiscipline. Gandhi, in turn, floated her own faction of the Congress party and managed to retain most of the Congress MPs on her side with only 65 on the side of the Congress (O) faction. The Gandhi faction, called Congress (R), lost its majority in the parliament but remained in power with the support of regional parties such as DMK. The policies of the Congress under Gandhi, before the 1971 elections, also included proposals for the abolition of the Privy Purse to former rulers of the princely states and the 1969 nationalization of the fourteen largest banks in India.
In 1967, a military conflict broke out between India and China along the border of the Himalayan Kingdom of Sikkim, then an Indian protectorate. India emerged as the victor, successfully repelling Chinese attacks and forcing the subsequent withdrawal of Chinese forces from the region.
Throughout the conflict, the Indian losses were 88 killed and 163 wounded while Chinese casualties stood at 340 killed and 450 wounded, according to the Indian Defense Ministry. Chinese sources made no declarations of casualties but alleged India to be the aggressor.
In December 1967, Indira Gandhi remarked of these developments that "China continues to maintain an attitude of hostility towards us and spares no opportunity to malign us and to carry on anti-Indian propaganda not only against the Indian Government but the whole way of our democratic functioning."
In 1975, Gandhi incorporated Sikkim into India, after a referendum in which a majority of Sikkimese voted to join India. This move was condemned as a "despicable act of the Indian Government" by China. Chinese government mouthpiece China Daily wrote that "the Nehrus, father and daughter, had always acted in this way, and Indira Gandhi had gone further".
Garibi Hatao (Remove Poverty) was the resonant theme for Gandhi's 1971 political bid. The slogan was developed in response to the combined opposition alliance's use of the two-word manifesto—"Indira Hatao" (Remove Indira). The Garibi Hatao slogan and the proposed anti-poverty programs that came with it were designed to give Gandhi independent national support, based on the rural and urban poor. This would allow her to bypass the dominant rural castes both in and outside of state and local governments, as well as the urban commercial class. For their part, the previously voiceless poor would at last gain both political worth and political weight. The programs created through Garibi Hatao, though carried out locally, were funded and developed by the Central Government in New Delhi. The program was supervised and staffed by the Indian National Congress party. "These programs also provided the central political leadership with new and vast patronage resources to be disbursed ... throughout the country."
The Congress government faced numerous problems during this term. Some of these were due to high inflation, which in turn was caused by wartime expenses, drought in some parts of the country and, more importantly, the 1973 oil crisis. Opposition to her in the 1973–75 period, after the Gandhi wave had receded, was strongest in the states of Bihar and Gujarat. In Bihar, Jayaprakash Narayan, the veteran leader, came out of retirement to lead the protest movement there.
Gandhi's biggest achievement following the 1971 election came in December 1971 with India's decisive victory over Pakistan in the Indo-Pakistani War. That victory occurred in the last two weeks of the Bangladesh Liberation War, which led to the formation of independent Bangladesh. An insurgency in East Pakistan (now Bangladesh) formed in early 1971, with Bengalis and East Pakistanis revolting against authoritarian rule from the central West Pakistan Government. In response, Pakistani security forces launched the infamous Operation Searchlight, in which Pakistan committed genocide against Bengali Hindus, nationalists and the intelligentsia. Gandhi's India initially refrained from intervening in the insurgency but quickly started to support Bengali rebels through the provision of military supplies. Indian forces clashed multiple times with Pakistani forces on the Eastern border. At one point, Indian forces, along with Mukti Bahini rebels, attacked Pakistani forces at Dhalai. The attack, supported and later successfully executed by India, was intended to stop Pakistani cross-border shelling. The battle occurred more than a month before India's official intervention in December. Gandhi quickly dispatched more troops to the Eastern border with East Pakistan, hoping to support Mukti Bahini rebels and stop any Pakistani infiltration. Indian forces then clashed again with Pakistani forces, crossing the border and securing Garibpur after a battle lasting from 20 to 21 November 1971. The next day, on 22 November, Indian and Pakistani aircraft engaged in a dogfight over the Boyra Salient, in which thousands of people watched as 4 Indian Folland Gnats shot down 2 Pakistani Canadair Sabres and damaged another. Both Pakistani pilots that were shot down were captured as prisoners of war. The Battle of Boyra instantly made the 4 Indian pilots celebrities and stirred large-scale nationalist sentiment as the Bangladesh Liberation War saw more and more Indian intervention and escalation. Other clashes also happened on the same day but did not receive as much media attention as did the battles of Boyra and Garibpur. On 3 December 1971, the Pakistan Air Force launched Operation Chengiz Khan, which saw Pakistani aircraft attacking Indian airbases and military installations across the Western border in a pre-emptive strike. The initial night-time attack by Pakistani forces was foiled, failing to inflict any major damage on Indian airbases, allowing Indian aircraft to counterattack into West Pakistan. Gandhi quickly declared a state of emergency and addressed the nation on radio shortly after midnight, stating: "We must be prepared for a long period of hardship and sacrifice."
Both countries mobilized for war, and Gandhi ordered an all-out invasion of East Pakistan. Pakistan's navy had not improved since the 1965 war, and the Pakistani air force could not launch attacks on the same scale as the Indian air force. The Pakistan Army quickly attempted major land operations on the western border, but most of these attacks, apart from some in Kashmir, stalled, allowing Indian counterattacks to gain ground. The Pakistan Army also lacked large-scale organization, which contributed to miscommunication and high casualties on the western front.
On the Eastern Front of the war, Indian generals opted for a high-speed lightning war, using mechanized and airborne units to bypass Pakistani opposition and make rapid strides towards Dhaka, the capital of East Pakistan. Jagjit Singh Aurora (who later became a critic of Gandhi in 1984) led the Indian Army's Eastern Command. The Indian Air Force quickly overcame the small contingent of Pakistani aircraft in East Pakistan, giving India air superiority over the region. Indian forces liberated Jessore and several other towns, and the Battle of Sylhet, fought between 7 and 15 December 1971, saw India conduct its first heliborne operation. India then conducted another airdrop on 9 December, with Indian forces led by Major General Sagat Singh capturing just under 5,000 Pakistani POWs and crossing the Meghna River towards Dhaka. Two days later, Indian forces conducted the largest airborne operation since World War II: 750 men of the Army's Parachute Regiment landed in Tangail and defeated the Pakistani forces in the area, securing a direct route to Dhaka. Few Pakistani soldiers escaped the battle, with only about 900 of 7,000 retreating to Dhaka alive. By 12 December, Indian forces had reached the outskirts of Dhaka and prepared to besiege the capital. Indian heavy artillery arrived by the 14th and shelled the city.
As surrender became apparent by 14 December 1971, Pakistani paramilitaries and militia roamed the streets of Dhaka during the night, kidnapping, torturing and then executing any educated Bengali who was seen as a potential leader of Bangladesh once Pakistan surrendered. Over 200 such people were killed on the 14th. By 16 December, Pakistani morale had reached a low point, with the Indian Army finally encircling and besieging Dhaka. On the 16th, Indian forces issued a 30-minute ultimatum for the city to surrender. Seeing that the city's defences paled in comparison to the Mukti Bahini and Indian forces outside, Lt-Gen. A. A. K. Niazi, commander of Pakistan's Eastern Command, and his deputy, Vice-Admiral M. S. Khan, surrendered the city without resistance. BBC News captured the moment of surrender as Indian soldiers from the Parachute Regiment streamed into the city. As Indian forces and the Mukti Bahini rounded up the remaining Pakistani forces, Lieutenant General Jagjit Singh Aurora of India and A. A. K. Niazi of Pakistan signed the Pakistani Instrument of Surrender at 16:31 IST on 16 December 1971. The surrender marked the collapse of the East Pakistan government and the end of the war. 93,000 soldiers of the Pakistani security forces surrendered, the largest surrender since World War II. The entire four-tiered military command surrendered to India along with its officers and generals. Large crowds flooded the scene, anti-Pakistani slogans rang out, and Pakistani POWs were beaten by locals; Indian officers eventually formed a human chain to protect the POWs and Niazi from being lynched. Most of the 93,000 captured were Pakistan Army officers or paramilitary officers, along with 12,000 supporters (razakars). Hostilities officially ended on 17 December 1971. About 8,000 Pakistani soldiers were killed and 25,000 wounded, while Indian forces suffered only 3,000 dead and 12,000 wounded. India claimed to have captured 3,600 square kilometres of Pakistani territory on the Western Front while losing 126 square kilometres to Pakistan.
Gandhi was hailed as the goddess Durga by the public as well as by opposition leaders following India's victory over Pakistan in the war. In the elections held for state assemblies across India in March 1972, the Congress (R) swept to power in most states, riding on the post-war "Indira wave".
On 12 June 1975, the Allahabad High Court declared Indira Gandhi's election to the Lok Sabha in 1971 void on the grounds of electoral malpractice. The election petition, filed by her 1971 opponent Raj Narain (who later defeated her in the 1977 parliamentary election in the Raebareli constituency), alleged several major as well as minor instances of the use of government resources for campaigning. Gandhi had asked one of her colleagues in government, Ashoke Kumar Sen, to defend her in court. She gave evidence in her defence during the trial. After almost four years, the court found her guilty of dishonest election practices, excessive election expenditure, and of using government machinery and officials for party purposes. The judge, however, rejected the more serious charges of bribery laid against her in the case.
The court ordered her stripped of her parliamentary seat and banned her from running for any office for six years. As the constitution requires that the Prime Minister must be a member of either the Lok Sabha or the Rajya Sabha, the two houses of the Parliament of India, she was effectively removed from office. However, Gandhi rejected calls to resign. She announced plans to appeal to the Supreme Court and insisted that the conviction did not undermine her position. She said: "There is a lot of talk about our government not being clean, but from our experience the situation was very much worse when [opposition] parties were forming governments." And she dismissed criticism of the way her Congress Party raised election campaign money, saying all parties used the same methods. The prime minister retained the support of her party, which issued a statement backing her.
After news of the verdict spread, hundreds of supporters demonstrated outside her house, pledging their loyalty. Indian High Commissioner to the United Kingdom Braj Kumar Nehru said Gandhi's conviction would not harm her political career. "Mrs Gandhi has still today overwhelming support in the country," he said. "I believe the prime minister of India will continue in office until the electorate of India decides otherwise".
Gandhi moved to restore order by ordering the arrest of most of the opposition leaders participating in the unrest. Her Cabinet and government recommended that then President Fakhruddin Ali Ahmed declare a state of emergency because of the disorder and lawlessness following the Allahabad High Court decision. Accordingly, Ahmed declared a State of Emergency caused by internal disorder, based on the provisions of Article 352(1) of the Constitution, on 25 June 1975. During the Emergency, there was a widespread rumour that Indira had ordered her security guards to eliminate the firebrand trade unionist and socialist party leader George Fernandes while he was on the run. A few international organisations and government officials wrote to Indira Gandhi pleading with her to rescind such decrees. Fernandes had called a nationwide railway strike in 1974 that shut down the railways for three weeks and became the largest industrial action in Asia. Gandhi had been furious with him, and the strike had been heavily cracked down upon.
Within a few months, President's rule was imposed on the two opposition-ruled states of Gujarat and Tamil Nadu, thereby bringing the entire country under either direct Central rule or governments led by the ruling Congress party. Police were granted powers to impose curfews and detain citizens indefinitely; all publications were subjected to substantial censorship by the Ministry of Information and Broadcasting. Finally, the impending legislative assembly elections were postponed indefinitely, with all opposition-controlled state governments being removed by virtue of the constitutional provision allowing for the dismissal of a state government on the recommendation of the state's governor.
Indira Gandhi used the emergency provisions to replace party members who were in conflict with her:
Unlike her father Jawaharlal Nehru, who preferred to deal with strong chief ministers in control of their legislative parties and state party organizations, Mrs. Gandhi set out to remove every Congress chief minister who had an independent base and to replace each of them with ministers personally loyal to her...Even so, stability could not be maintained in the states...
President Ahmed issued ordinances that did not require debate in the Parliament, allowing Gandhi to rule by decree.
The Emergency saw the entry of Gandhi's younger son, Sanjay Gandhi, into Indian politics. He wielded tremendous power during the emergency without holding any government office. According to Mark Tully, "His inexperience did not stop him from using the Draconian powers his mother, Indira Gandhi, had taken to terrorise the administration, setting up what was in effect a police state."
It was said that during the Emergency he virtually ran India along with his friends, especially Bansi Lal. It was also quipped that Sanjay Gandhi had total control over his mother and that the government was run by the PMH (Prime Minister's House) rather than the PMO (Prime Minister's Office).
In 1977, after extending the state of emergency twice, Gandhi called elections to give the electorate a chance to vindicate her rule. She may have grossly misjudged her popularity by reading what the heavily censored press wrote about her. She was opposed by the Janata alliance of opposition parties. The alliance was made up of the Bharatiya Jana Sangh, Congress (O), the socialist parties, and Charan Singh's Bharatiya Kranti Dal, representing northern peasants and farmers. The Janata alliance, with Jai Prakash Narayan as its spiritual guide, claimed the elections were the last chance for India to choose between "democracy and dictatorship". The Congress Party split during the election campaign of 1977: veteran Gandhi supporters like Jagjivan Ram, Hemvati Nandan Bahuguna and Nandini Satpathy were compelled to part ways and form a new political entity, the CFD (Congress for Democracy), due primarily to intra-party politicking and the circumstances created by Sanjay Gandhi. The prevailing rumour was that he intended to dislodge Gandhi, and the trio stood to prevent that. Gandhi's Congress party was soundly crushed in the elections; the Janata Party's "democracy or dictatorship" claim seemed to resonate with the public. Gandhi and Sanjay Gandhi both lost their seats, and Congress was reduced to 153 seats (compared with 350 in the previous Lok Sabha), 92 of which were in the South. The Janata alliance, under the leadership of Morarji Desai, came to power after the State of Emergency was lifted. The alliance parties later merged to form the Janata Party under the guidance of the Gandhian leader Jayaprakash Narayan. The other leaders of the Janata Party were Charan Singh, Raj Narain, George Fernandes and Atal Bihari Vajpayee.
After the humiliating defeat in the election, the king of Nepal, through an intermediary, offered to let her and her family move to Nepal. She refused to move herself but was open to sending her two sons, Sanjay Gandhi and Rajiv Gandhi. However, after consulting with Kao, she declined the offer altogether, keeping in view her future political career.
Since Gandhi had lost her seat in the election, the defeated Congress party appointed Yashwantrao Chavan as their parliamentary party leader. Soon afterwards, the Congress party split again, with Gandhi floating her own faction, Congress (I), where the "I" stood for Indira. She won a by-election in the Chikmagalur constituency and took a seat in the Lok Sabha in November 1978 after the Janata Party's attempts to have the Kannada matinee idol Rajkumar run against her failed when he refused to contest the election, saying he wanted to remain apolitical. However, the Janata government's home minister, Charan Singh, ordered her arrest along with Sanjay Gandhi on several charges, none of which would be easy to prove in an Indian court. The arrest meant that Gandhi was automatically expelled from Parliament. These allegations included that she "had planned or thought of killing all opposition leaders in jail during the Emergency". However, this strategy backfired disastrously. In response to her arrest, Gandhi's supporters hijacked an Indian Airlines jet and demanded her immediate release. Her arrest and long-running trial gained her sympathy from many people. The Janata coalition was united only by its hatred of Gandhi (or "that woman", as some called her). The party included right-wing Hindu nationalists, socialists and former Congress party members. With so little in common, the Morarji Desai government was bogged down by infighting. In 1979, the government began to unravel over the issue of the dual loyalties of some members to Janata and the Rashtriya Swayamsevak Sangh (RSS), the Hindu nationalist, paramilitary organisation. The ambitious Union finance minister, Charan Singh, who as Union home minister the previous year had ordered the Gandhis' arrests, took advantage of this and started courting Indira and Sanjay. After a significant exodus from the party to Singh's faction, Desai resigned in July 1979. Singh was appointed prime minister by President Reddy after Gandhi and Sanjay Gandhi promised Singh that Congress (I) would support his government from outside on certain conditions. The conditions included dropping all charges against Gandhi and Sanjay. Since Singh refused to drop them, Congress (I) withdrew its support and President Reddy dissolved Parliament in August 1979.
Before the 1980 elections, Gandhi approached the then Shahi Imam of Jama Masjid, Syed Abdullah Bukhari, and entered into an agreement with him on the basis of a 10-point programme to secure the support of Muslim voters. In the elections held in January, Congress (I) under Indira's leadership returned to power with a landslide majority.
The Congress Party under Gandhi swept back into power in January 1980. In this election, Gandhi was elected by the voters of the Medak constituency. On 23 June, Sanjay was killed in a plane crash while performing an aerobatic manoeuvre in New Delhi. In 1980, as a tribute to her son's dream of launching an indigenously manufactured car, Gandhi nationalised Sanjay's debt-ridden company, Maruti Udyog, for Rs. 43,000,000 (4.34 crore) and invited joint-venture bids from automobile companies around the world. Suzuki of Japan was selected as the partner. The company launched its first Indian-manufactured car in 1984.
By the time of Sanjay's death, Gandhi trusted only family members, and therefore persuaded her reluctant son, Rajiv, to enter politics.
Her PMO staff included H. Y. Sharada Prasad as her information adviser and speechwriter.
Following the 1977 elections, a coalition led by the Sikh-majority Akali Dal came to power in the northern Indian state of Punjab. In an effort to split the Akali Dal and gain popular support among the Sikhs, Gandhi's Congress Party helped to bring the orthodox religious leader Jarnail Singh Bhindranwale to prominence in Punjab politics. Later, Bhindranwale's organisation, Damdami Taksal, became embroiled in violence with another religious sect called the Sant Nirankari Mission, and he was accused of instigating the murder of Jagat Narain, the owner of the Punjab Kesari newspaper. After being arrested over this matter, Bhindranwale disassociated himself from the Congress Party and joined Akali Dal. In July 1982, he led the campaign for the implementation of the Anandpur Resolution, which demanded greater autonomy for the Sikh-majority state. Meanwhile, a small group of Sikhs, including some of Bhindranwale's followers, turned to militancy after being targeted by government officials and police for supporting the Anandpur Resolution. In 1982, Bhindranwale and approximately 200 armed followers moved into a guest house called the Guru Nanak Niwas near the Golden Temple.
By 1983, the Temple complex had become a fort for many militants. The Statesman later reported that light machine guns and semi-automatic rifles were known to have been brought into the compound. On 23 April 1983, the Punjab Police Deputy Inspector General A. S. Atwal was shot dead as he left the Temple compound. The following day, Harchand Singh Longowal (then president of Akali Dal) confirmed the involvement of Bhindranwale in the murder.
After several futile negotiations, in June 1984, Gandhi ordered the Indian army to enter the Golden Temple to remove Bhindranwale and his supporters from the complex. The army used heavy artillery, including tanks, in the action code-named Operation Blue Star. The operation badly damaged or destroyed parts of the Temple complex, including the Akal Takht shrine and the Sikh library. It also led to the deaths of many Sikh fighters and innocent pilgrims. The number of casualties remains disputed with estimates ranging from many hundreds to many thousands.
Gandhi was accused of using the attack for political ends. Harjinder Singh Dilgeer stated that she attacked the temple complex to present herself as a great hero in order to win the general elections planned towards the end of 1984. There was fierce criticism of the action by Sikhs in India and overseas. There were also incidents of mutiny by Sikh soldiers in the aftermath of the attack.
"I am alive today, I may not be there tomorrow ... I shall continue to serve until my last breath and when I die, I can say, that every drop of my blood will invigorate India and strengthen it ... Even if I died in the service of the nation, I would be proud of it. Every drop of my blood ... will contribute to the growth of this nation and to make it strong and dynamic."
—Gandhi's remarks in her last speech, delivered a day before her death (30 October 1984) at the then Parade Ground, Odisha.
On 31 October 1984, two of Gandhi's Sikh bodyguards, Satwant Singh and Beant Singh, shot her with their service weapons in the garden of the prime minister's residence at 1 Safdarjung Road, New Delhi, allegedly in revenge for Operation Blue Star. The shooting occurred as she was walking past a wicket gate guarded by the two men. She was to be interviewed by the British filmmaker Peter Ustinov, who was filming a documentary for Irish television. Beant shot her three times using his side-arm; Satwant fired 30 rounds. The men dropped their weapons and surrendered. Afterwards, they were taken away by other guards into a closed room where Beant was shot dead. Kehar Singh was later arrested for conspiracy in the attack. Both Satwant and Kehar were sentenced to death and hanged in Delhi's Tihar Jail.
Gandhi was taken to the All India Institute of Medical Sciences at 9:30 AM, where doctors operated on her. She was declared dead at 2:20 PM. The post-mortem examination was conducted by a team of doctors headed by Tirath Das Dogra. Dogra stated that Gandhi had sustained as many as 30 bullet wounds from two sources: a Sten submachine gun and a .38 Special revolver. The assailants had fired 31 bullets at her, of which 30 hit her; 23 had passed through her body while seven remained inside her. Dogra extracted the bullets to establish the make of the weapons used and to match each weapon with the recovered bullets by ballistic examination. The bullets were matched with their respective weapons at the Central Forensic Science Laboratory (CFSL), Delhi. Subsequently, Dogra appeared in Shri Mahesh Chandra's court as an expert witness (PW-5); his testimony took several sessions. The cross-examination was conducted by Shri Pran Nath Lekhi, the defence counsel. Salma Sultan provided the first news of her assassination on Doordarshan's evening news on 31 October 1984, more than 10 hours after she was shot.
Gandhi was cremated in accordance with Hindu tradition on 3 November near Raj Ghat. The site where she was cremated is known today as Shakti Sthal. Before the cremation, her body lay in state at Teen Murti House so that the public could pay homage. Thousands of followers strained for a glimpse of the cremation. Her funeral was televised live on domestic and international stations, including the BBC. After her death, the Parade Ground was converted into the Indira Gandhi Park, which was inaugurated by her son, Rajiv Gandhi.
Gandhi's assassination dramatically changed the political landscape. Rajiv succeeded his mother as Prime Minister within hours of her murder and anti-Sikh riots erupted, lasting for several days and killing more than 3,000 Sikhs in New Delhi and an estimated 8,000 across India. Many Congress leaders were believed to be behind the anti-Sikh massacre.
Gandhi's death was mourned worldwide. World leaders condemned the assassination and said her death would leave a 'big emptiness' in international affairs. In Moscow, Soviet President Konstantin Chernenko sent condolences stating, "The Soviet people learned with pain and sorrow about the untimely death in a villainous assassination of the glorious daughter of the great Indian people, a fiery fighter for peace and security of peoples and a great friend of the Soviet Union". President Ronald Reagan, along with Secretary of State George Shultz, visited the Indian Embassy to sign a book of condolences and expressed his 'shock, revulsion, and grief' over the assassination. Walter Mondale, the 42nd vice president of the United States, called Gandhi 'a great leader of a great democracy' and deplored 'this shocking act of violence'. Asian, African and European leaders mourned Gandhi as a great champion of democracy, and the Non-Aligned Movement expressed its 'deepest grief' and called the killing a 'terrorist' act. South Korean President Chun Doo-hwan said Gandhi's death meant the 'loss of a great leader to the whole world.' Yugoslav President Veselin Đuranović, Pakistani President Mohammad Zia ul-Haq, Italian President Sandro Pertini, Pope John Paul II at the Vatican, and French President François Mitterrand also condemned the killing. At the United Nations, the General Assembly paused in its work as shocked delegates mourned the death. Assembly President Paul Lusaka of Zambia postponed a scheduled debate and hastily organized a memorial meeting.
Gandhi is remembered for her ability to effectively promote Indian foreign policy measures.
In early 1971, disputed elections in Pakistan led then East Pakistan to declare independence as Bangladesh. Repression and violence by the Pakistani army led to 10 million refugees crossing the border into India over the following months. Finally, in December 1971, Gandhi intervened directly in the conflict to liberate Bangladesh. India emerged victorious following the war with Pakistan to become the dominant power of South Asia. India had signed a treaty with the Soviet Union promising mutual assistance in the case of war, while Pakistan received active support from the United States during the conflict. U.S. President Richard Nixon disliked Gandhi personally, referring to her as a "bitch" and a "clever fox" in his private communication with Secretary of State Henry Kissinger. Nixon later wrote of the war: "[Gandhi] suckered [America]. Suckered us ... this woman suckered us." Relations with the U.S. became distant as Gandhi developed closer ties with the Soviet Union after the war. The latter grew to become India's largest trading partner and its biggest arms supplier for much of Gandhi's premiership. India's new hegemonic position, as articulated under the "Indira Doctrine", led to attempts to bring the Himalayan states under India's sphere of influence. Nepal and Bhutan remained aligned with India, while in 1975, after years of campaigning, Sikkim voted to join India in a referendum.
India maintained close ties with neighbouring Bangladesh (formerly East Pakistan) following the Liberation War. Prime Minister Sheikh Mujibur Rahman recognised Gandhi's contributions to the independence of Bangladesh. However, Mujibur Rahman's pro-India policies antagonised many in Bangladeshi politics and the military, who feared that Bangladesh had become a client state of India. The assassination of Mujibur Rahman in 1975 led to the establishment of Islamist military regimes that sought to distance the country from India. Gandhi's relationship with the military regimes was strained because of her alleged support of anti-Islamist leftist guerrilla forces in Bangladesh. Generally, however, there was a rapprochement between Gandhi and the Bangladeshi regimes, although issues such as border disputes and the Farakka Dam remained irritants to bilateral ties. In 2011, the Government of Bangladesh conferred its highest state award for non-nationals, the Bangladesh Freedom Honour, posthumously on Gandhi for her "outstanding contribution" to the country's independence.
Gandhi's approach to dealing with Sri Lanka's ethnic problems was initially accommodating. She enjoyed cordial relations with Prime Minister Sirimavo Bandaranaike. In 1974, India ceded the tiny islet of Katchatheevu to Sri Lanka to save Bandaranaike's socialist government from a political disaster. However, relations soured over Sri Lanka's movement away from socialism under J. R. Jayewardene, whom Gandhi despised as a "western puppet". India under Gandhi was alleged to have supported the Liberation Tigers of Tamil Eelam (LTTE) militants in the 1980s to put pressure on Jayewardene to abide by Indian interests. Nevertheless, Gandhi rejected demands to invade Sri Lanka in the aftermath of Black July 1983, an anti-Tamil pogrom carried out by Sinhalese mobs. Gandhi made a statement emphasising that she stood for the territorial integrity of Sri Lanka, although she also stated that India cannot "remain a silent spectator to any injustice done to the Tamil community."
India's relationship with Pakistan remained strained following the Shimla Accord in 1972. Gandhi's authorisation of the detonation of a nuclear device at Pokhran in 1974 was viewed by Pakistani leader Zulfikar Ali Bhutto as an attempt to intimidate Pakistan into accepting India's hegemony in the subcontinent. However, in May 1976, Gandhi and Bhutto both agreed to reopen diplomatic establishments and normalise relations. Following the rise to power of General Muhammad Zia-ul-Haq in Pakistan in 1978, India's relations with its neighbour reached a nadir. Gandhi accused General Zia of supporting Khalistani militants in Punjab. Military hostilities recommenced in 1984 following Gandhi's authorisation of Operation Meghdoot. India was victorious in the resulting Siachen conflict against Pakistan.
In order to keep the Soviet Union and the United States out of South Asia, Gandhi was instrumental in establishing the South Asian Association for Regional Cooperation (SAARC) in 1983.
Gandhi remained a staunch supporter of the Palestinians in the Arab–Israeli conflict and was critical of the Middle East diplomacy sponsored by the United States. Israel was viewed as a religious state, and thus an analogue to India's archrival Pakistan. Indian diplomats hoped to win Arab support in countering Pakistan in Kashmir. Nevertheless, Gandhi authorised the development of a secret channel of contact and security assistance with Israel in the late 1960s. Her lieutenant, P. V. Narasimha Rao, later became prime minister and approved full diplomatic ties with Israel in 1992.
India's pro-Arab policy had mixed success. The establishment of close ties with the socialist and secular Baathist regimes to some extent neutralised Pakistani propaganda against India. However, the Indo-Pakistani War of 1971 presented a dilemma for the Arab and Muslim states of the Middle East, as the war was fought between two states both friendly to the Arabs. The progressive Arab regimes in Egypt, Syria, and Algeria chose to remain neutral, while the conservative pro-American Arab monarchies in Jordan, Saudi Arabia, Kuwait, and the United Arab Emirates openly supported Pakistan. Egypt's stance was met with dismay by the Indians, who had come to expect close co-operation with the Baathist regimes. But the death of Nasser in 1970, Sadat's growing friendship with Riyadh, and his mounting differences with Moscow constrained Egypt to a policy of neutrality. Gandhi's overtures to Muammar Gaddafi were rebuffed. Libya agreed with the Arab monarchies in believing that Gandhi's intervention in East Pakistan was an attack against Islam.
The 1971 war became a temporary stumbling block in growing Indo-Iranian ties. Although Iran had earlier characterized the Indo-Pakistani war in 1965 as Indian aggression, the Shah had launched an effort at rapprochement with India in 1969 as part of his effort to secure support for a larger Iranian role in the Persian Gulf. Gandhi's tilt towards Moscow and her dismemberment of Pakistan were perceived by the Shah as part of a larger anti-Iran conspiracy involving India, Iraq, and the Soviet Union. Nevertheless, Iran had resisted Pakistani pressure to activate the Baghdad Pact and draw the Central Treaty Organisation (CENTO) into the conflict. Gradually, Indian and Iranian disillusionment with their respective regional allies led to a renewed partnership between the nations. Gandhi was unhappy with the lack of support from India's Arab allies during the war with Pakistan, while the Shah was apprehensive about the growing friendship between Pakistan and the Arab states of the Persian Gulf, especially Saudi Arabia, and the growing influence of Islam in Pakistani society. There was an increase in Indian economic and military co-operation with Iran during the 1970s. The 1974 Indo-Iranian agreement led to Iran supplying nearly 75 percent of India's crude oil demand. Gandhi appreciated the Shah's disregard of Pan-Islamism in diplomacy.
One of the major developments in Southeast Asia during Gandhi's premiership was the formation of the Association of Southeast Asian Nations (ASEAN) in 1967. Relations between ASEAN and India were mutually antagonistic. India perceived ASEAN to be linked to the Southeast Asia Treaty Organization (SEATO) and therefore saw it as a pro-American organisation. For their part, the ASEAN nations were unhappy with Gandhi's sympathy for the Viet Cong and India's strong links with the USSR. Furthermore, there were also apprehensions in the region about Gandhi's plans, particularly after India played a major role in breaking up Pakistan and facilitating the emergence of Bangladesh as a sovereign country in 1971. India's entry into the nuclear weapons club in 1974 also contributed to tensions in Southeast Asia. Relations only began to improve following Gandhi's endorsement of the ZOPFAN declaration and the disintegration of the SEATO alliance in the aftermath of Pakistani and American defeats in the region. Nevertheless, Gandhi's close relations with reunified Vietnam and her decision to recognise the Vietnam-installed government of Cambodia in 1980 meant that India and ASEAN were unable to develop a viable partnership.
On 26 September 1981, Gandhi was conferred with the honorary degree of Doctor at the Laucala Graduation at the University of the South Pacific in Fiji.
Although independent India was initially viewed as a champion of various African independence movements, its cordial relationship with the Commonwealth of Nations and its liberal views of British policies in East Africa had harmed its image as a staunch supporter of various independence movements in the third world. Indian condemnation of militant struggles in Kenya and Algeria was in sharp contrast to China, who had supported armed struggle to win African independence. After reaching a high diplomatic point in the aftermath of Nehru's role in the Suez Crisis, India's isolation from Africa was almost complete when only four nations—Ethiopia, Kenya, Nigeria, and Libya—supported her during the Sino-Indian War in 1962. After Gandhi became prime minister, diplomatic and economic relations with the states which had sided with India during the Sino-Indian War were expanded. Gandhi began negotiations with the Kenyan government to establish the Africa-India Development Cooperation. The Indian government also started considering the possibility of bringing Indians settled in Africa within the framework of its policy goals to help recover its declining geo-strategic influence. Gandhi declared the people of Indian origin settled in Africa as "Ambassadors of India". Efforts to rope in the Asian community to join Indian diplomacy, however, came to naught, in part because of the unwillingness of Indians to remain in politically insecure surroundings, and because of the exodus of African Indians to Britain with the passing of the Commonwealth Immigrants Act in 1968. In Uganda, the African Indian community suffered persecution and eventually expulsion under the government of Idi Amin.
Foreign and domestic policy successes in the 1970s enabled Gandhi to rebuild India's image in the eyes of African states. Victory over Pakistan and India's possession of nuclear weapons showed the degree of India's progress. Furthermore, the conclusion of the Indo-Soviet treaty in 1971, and threatening gestures by the United States, to send its nuclear-armed Task Force 74 into the Bay of Bengal at the height of the East Pakistan crisis had enabled India to regain its anti-imperialist image. Gandhi firmly tied Indian anti-imperialist interests in Africa to those of the Soviet Union. Unlike Nehru, she openly and enthusiastically supported liberation struggles in Africa. At the same time, Chinese influence in Africa had declined owing to its incessant quarrels with the Soviet Union. These developments permanently halted India's decline in Africa and helped to reestablish its geo-strategic presence.
The Commonwealth is a voluntary association of mainly former British colonies. India maintained cordial relations with most of the members during Gandhi's time in power. In the 1980s, she, along with Canadian prime minister Pierre Trudeau, Zambian president Kenneth Kaunda, Australian prime minister Malcolm Fraser and Singaporean prime minister Lee Kuan Yew, was regarded as one of the pillars of the Commonwealth. India under Gandhi also hosted the 1983 Commonwealth Heads of Government summit in New Delhi. Gandhi used these meetings as a forum to put pressure on member countries to cut economic, sports, and cultural ties with apartheid South Africa.
In the early 1980s under Gandhi, India attempted to reassert its prominent role in the Non-Aligned Movement by focusing on the relationship between disarmament and economic development. By appealing to the economic grievances of developing countries, Gandhi and her successors exercised a moderating influence on the Non-aligned movement, diverting it from some of the Cold War issues that marred the controversial 1979 Havana meeting where Cuban leader Fidel Castro attempted to steer the movement towards the Soviet Union. Although hosting the 1983 summit at Delhi boosted Indian prestige within the movement, its close relations with the Soviet Union and its pro-Soviet positions on Afghanistan and Cambodia limited its influence.
Gandhi spent a number of years in Europe during her youth and formed many friendships there. During her premiership, she formed friendships with many leaders, such as West German chancellor Willy Brandt and Austrian chancellor Bruno Kreisky. She also enjoyed a close working relationship with many British leaders, including Conservative prime ministers Edward Heath and Margaret Thatcher.
The relationship between India and the Soviet Union deepened during Gandhi's rule. The main reason was the perceived bias of the United States and China, rivals of the USSR, towards Pakistan. The support of the Soviets with arms supplies and the casting of a veto at the United Nations helped in winning and consolidating the victory over Pakistan in the 1971 Bangladesh liberation war. Before the war, Gandhi signed a treaty of friendship with the Soviets. They were unhappy with the 1974 nuclear test conducted by India but did not support further action because of the ensuing Cold War with the United States. Gandhi was unhappy with the Soviet invasion of Afghanistan, but once again calculations involving relations with Pakistan and China kept her from criticising the Soviet Union harshly. The Soviets became the main arms supplier during the Gandhi years by offering cheap credit and transactions in rupees rather than in dollars. The easy trade deals also applied to non-military goods. Under Gandhi, by the early 1980s, the Soviets had become India's largest trading partner.
Soviet intelligence was involved in India during Indira Gandhi's administration, sometimes at Gandhi's expense. In the prelude to Operation Blue Star, by 1981, the Soviets had launched Operation Kontakt, which was based on a forged document purporting to contain details of the weapons and money provided by the ISI to Sikh militants who wanted to create an independent country. In November 1982, Yuri Andropov, the General Secretary of the Communist Party and leader of the Soviet Union, approved a proposal to fabricate Pakistani intelligence documents detailing ISI plans to foment religious disturbances in Punjab and promote the creation of Khalistan as an independent Sikh state. Indira Gandhi's decision to move troops into the Punjab was based on her taking seriously the information provided by the Soviets regarding secret CIA support for the Sikhs.
According to the Mitrokhin Archive, the Soviets used a new recruit in the New Delhi residency named "Agent S", who was close to Indira Gandhi, as a major channel for providing her with disinformation. Agent S provided Indira Gandhi with false documents purporting to show Pakistani involvement in the Khalistan conspiracy. The KGB became confident that it could continue to deceive Indira Gandhi indefinitely with fabricated reports of CIA and Pakistani conspiracies against her. The Soviets persuaded Rajiv Gandhi during a visit to Moscow in 1983 that the CIA was engaged in subversion in the Punjab. When Rajiv Gandhi returned to India, he declared this to be true. The KGB was responsible for Indira Gandhi exaggerating the threats posed by both the CIA and Pakistan. This KGB role in facilitating Operation Blue Star was acknowledged by Subramanian Swamy, who stated in 1992: "The 1984 Operation Bluestar became necessary because of the vast disinformation against Sant Bhindranwale by the KGB, and repeated inside Parliament by the Congress Party of India."
A report following the Mitrokhin Archive also caused some historiographical controversy about Indira Gandhi. In India, a senior leader of the Bharatiya Janata Party, L. K. Advani, asked the Government for a white paper on the role of foreign intelligence agencies and a judicial enquiry into the allegations. The spokesperson of the Indian Congress party referred to the book as "pure sensationalism not even remotely based on facts or records" and pointed out that the book is not based on official records from the Soviet Union. Advani raised the issue because the book alleged a direct KGB link to the former prime minister Indira Gandhi (code-named VANO): "Suitcases full of banknotes were said to be routinely taken to the Prime Minister's house. Former Syndicate member S. K. Patil is reported to have said that Mrs. Gandhi did not even return the suitcases". An extensive footprint in the Indian media was also described: "According to KGB files, by 1973 it had ten Indian newspapers on its payroll (which cannot be identified for legal reasons) as well as a press agency under its control. During 1972 the KGB claimed to have planted 3,789 articles in Indian newspapers – probably more than in any other country in the non-Communist world." According to its files, the number fell to 2,760 in 1973 but rose to 4,486 in 1974 and 5,510 in 1975. Mitrokhin estimated that in some major NATO countries, despite active-measures campaigns, the KGB was able to plant little more than 1 per cent of the articles that it placed in the Indian press.
When Gandhi came to power in 1966, Lyndon Johnson was the US president. At the time, India was reliant on the US for food aid. Gandhi resented the US policy of food aid being used as a tool to force India to adopt policies favoured by the US. She also resolutely refused to sign the Treaty on the Non-Proliferation of Nuclear Weapons (NPT). Relations with the US were strained badly under President Richard Nixon and his favouring of Pakistan during the Bangladesh liberation war. Nixon despised Gandhi politically and personally. In 1981, Gandhi met President Ronald Reagan for the first time at the North–South Summit held to discuss global poverty. She had been described to him as an 'Ogre', but he found her charming and easy to work with and they formed a close working relationship during her premiership in the 1980s.
Gandhi presided over three Five-Year Plans as prime minister, two of which succeeded in meeting their targeted growth.
There is considerable debate over whether Gandhi was a socialist on principle or out of political expediency. Sunanda K. Datta-Ray described her as "a master of rhetoric ... often more posture than policy", while The Times journalist Peter Hazelhurst famously quipped that Gandhi's socialism was "slightly left of self-interest." Critics have focused on the contradictions in the evolution of her stance towards communism. Gandhi was known for her anti-communist stance in the 1950s, with Meghnad Desai even describing her as "the scourge of [India's] Communist Party." Yet she later forged close relations with Indian communists even while using the army to break the Naxalites. In this context, Gandhi was accused of formulating populist policies to suit her political needs. She was seemingly against the rich and big business while preserving the status quo in order to manipulate the support of the left in times of political insecurity, such as the late 1960s. Although in time Gandhi came to be viewed as the scourge of the right-wing and reactionary political elements of India, leftist opposition to her policies also emerged. As early as 1969, critics had begun accusing her of insincerity and Machiavellianism. The Indian Libertarian wrote: "it would be difficult to find a more machiavellian leftist than Mrs Indira Gandhi ... for here is Machiavelli at its best in the person of a suave, charming and astute politician." J. Barkley Rosser Jr. wrote that "some have even seen the declaration of emergency rule in 1975 as a move to suppress [leftist] dissent against Gandhi's policy shift to the right." In the 1980s, Gandhi was accused of "betraying socialism" after the beginning of Operation Forward, an attempt at economic reform. Nevertheless, others were more convinced of Gandhi's sincerity and devotion to socialism. Pankaj Vohra noted that "even the late prime minister's critics would concede that the maximum number of legislations of social significance was brought about during her tenure ... [and that] she lives in the hearts of millions of Indians who shared her concern for the poor and weaker sections and who supported her politics."
In summarising the biographical works on Gandhi, Blema S. Steinberg concludes she was decidedly non-ideological. Only 7.4% (24) of the 330 biographical extractions posit ideology as a reason for her policy choices. Steinberg notes that Gandhi's association with socialism was superficial: she had only a general and traditional commitment to the ideology by way of her political and family ties, and personally had a fuzzy concept of socialism. In one of the early interviews she gave as prime minister, Gandhi had ruminated: "I suppose you could call me a socialist, but you have to understand what we mean by that term ... we used the word [socialism] because it came closest to what we wanted to do here – which is to eradicate poverty. You can call it socialism; but if by using that word we arouse controversy, I don't see why we should use it. I don't believe in words at all." Regardless of the debate over her ideology or lack thereof, Gandhi remains a left-wing icon. She has been described by the Hindustan Times columnist Pankaj Vohra as "arguably the greatest mass leader of the last century." Her campaign slogan, Garibi Hatao ('Remove Poverty'), has become an often-used motto of the Indian National Congress Party. To the rural and urban poor, untouchables, minorities and women in India, Gandhi was "Indira Amma or Mother Indira."
Gandhi inherited a weak and troubled economy. Fiscal problems associated with the war with Pakistan in 1965, along with a drought-induced food crisis that spawned famines, had plunged India into the sharpest recession since independence. The government responded by taking steps to liberalise the economy and agreeing to the devaluation of the currency in return for the restoration of foreign aid. The economy managed to recover in 1966 and ended up growing at 4.1% over 1966–1969. Much of that growth, however, was offset by the fact that the external aid promised by the United States government and the International Bank for Reconstruction and Development (IBRD), meant to ease the short-run costs of adjustment to a liberalised economy, never materialised. American policy makers had complained of continued restrictions imposed on the economy. At the same time, Indo-US relations were strained because of Gandhi's criticism of the American bombing campaign in Vietnam. While it was thought at the time, and for decades after, that President Johnson's policy of withholding food grain shipments was to coerce Indian support for the war, in fact, it was to offer India rainmaking technology that he wanted to use as a counterweight to China's possession of the atomic bomb. In light of the circumstances, liberalisation became politically suspect and was soon abandoned. Grain diplomacy and currency devaluation became matters of intense national pride in India. After the bitter experience with Johnson, Gandhi decided not to request food aid in the future. Moreover, her government resolved never again to become "so vulnerably dependent" on aid, and painstakingly began building up substantial foreign exchange reserves. When food stocks slumped after poor harvests in 1972, the government made it a point to use foreign exchange to buy US wheat commercially rather than seek resumption of food aid.
The period of 1967–75 was characterised by socialist ascendency in India, which culminated in 1976 with the official declaration of state socialism. Gandhi not only abandoned the short-lived liberalisation programme but also aggressively expanded the public sector with new licensing requirements and other restrictions for industry. She began a new course by launching the Fourth Five-Year Plan in 1969. The government targeted growth at 5.7% while stating as its goals, "growth with stability and progressive achievement of self-reliance." The rationale behind the overall plan was Gandhi's Ten-Point Programme of 1967. This had been her first economic policy formulation, six months after coming to office. The programme emphasised greater state control of the economy with the understanding that government control assured greater welfare than private control. Related to this point were a set of policies that were meant to regulate the private sector. By the end of the 1960s, the reversal of the liberalisation process was complete, and India's policies were characterised as "protectionist as ever."
To deal with India's food problems, Gandhi expanded the emphasis on production of inputs to agriculture that had already been initiated by her father, Jawaharlal Nehru. The Green Revolution in India subsequently culminated under her government in the 1970s. It transformed the country from a nation heavily reliant on imported grains, and prone to famine, to one largely able to feed itself, and becoming successful in achieving its goal of food security. Gandhi had a personal motive in pursuing agricultural self-sufficiency, having found India's dependency on the U.S. for shipments of grains humiliating.
The economic period of 1967–75 became significant for its major wave of nationalisation amidst increased regulation of the private sector.
Some other objectives of the economic plan for the period were to provide for the minimum needs of the community through a rural works program and the removal of the privy purses of the nobility. Both these, and many other goals of the 1967 programme, were accomplished by 1974–75. Nevertheless, the success of the overall economic plan was tempered by the fact that annual growth at 3.3–3.4% over 1969–74 fell short of the targeted figure.
The Fifth Five-Year Plan (1974–79) was enacted against the backdrop of the state of emergency and the Twenty Point Program of 1975. The plan provided the economic rationale for the emergency, a political act that has often been justified on economic grounds. In contrast to the reception of Gandhi's earlier economic plan, this one was criticised for being a "hastily thrown together wish list." Gandhi promised to reduce poverty by targeting the consumption levels of the poor and enacting wide-ranging social and economic reforms. In addition, the government targeted an annual growth rate of 4.4% over the period of the plan.
The measures of the emergency regime were able to halt the economic trouble of the early to mid-1970s, which had been marred by harvest failures, fiscal contraction, and the breakdown of the Bretton Woods system of fixed exchange rates; the resulting turbulence in the foreign exchange markets was accentuated further by the oil shock of 1973. The government was able to exceed the targeted growth figure with an annual growth rate of 5.0–5.2% over the five-year period of the plan (1974–79). The economy grew at the rate of 9% in 1975–76 alone, and the Fifth Plan became the first plan during which the per capita income of the economy grew by over 5%.
Gandhi inherited a weak economy when she became prime minister again in 1980. The preceding year—1979–80—under the Janata Party government saw the strongest recession (−5.2%) in the history of modern India with inflation rampant at 18.2%. Gandhi proceeded to abrogate the Janata Party government's Five-Year Plan in 1980 and launched the Sixth Five-Year Plan (1980–85). Her government targeted an average growth rate of 5.2% over the period of the plan. Measures to check inflation were also taken; by the early 1980s it was under control at an annual rate of about 5%.
Although Gandhi continued professing socialist beliefs, the Sixth Five-Year Plan was markedly different from the years of Garibi Hatao. Populist programmes and policies were replaced by pragmatism. There was an emphasis on tightening public expenditure, greater efficiency of the state-owned enterprises (SOEs), which Gandhi qualified as a "sad thing", and on stimulating the private sector through deregulation and liberalisation of the capital market. The government subsequently launched Operation Forward in 1982, the first cautious attempt at reform. The Sixth Plan went on to become the most successful of the Five-Year Plans yet, showing an average growth rate of 5.7% over 1980–85.
During Lal Bahadur Shastri's last full year in office (1965), inflation averaged 7.7%, compared to 5.2% at the end of Gandhi's first term in office (1977). On average, inflation in India had remained below 7% through the 1950s and 1960s. It then accelerated sharply in the 1970s, from 5.5% in 1970–71 to over 20% by 1973–74, due to the international oil crisis. Gandhi declared inflation the gravest of problems in 1974 (at 25.2%) and devised a severe anti-inflation program. The government was successful in bringing down inflation during the emergency; achieving negative figures of −1.1% by the end of 1975–76.
Gandhi inherited a tattered economy in her second term; harvest failures and a second oil shock in the late 1970s had caused inflation to rise again. During Charan Singh's short time in office in the second half of 1979, inflation averaged 18.2%, compared to 6.5% during Gandhi's last year in office (1984). General economic recovery under Gandhi led to an average inflation rate of 6.5% from 1981–82 to 1985–86—the lowest since the beginning of India's inflation problems in the 1960s.
The unemployment rate remained constant at 9% over a nine-year period (1971–80) before declining to 8.3% in 1983.
Despite the provisions, control and regulations of the Reserve Bank of India, most banks in India had continued to be owned and operated by private persons. Businessmen who owned the banks were often accused of channelling the deposits into their own companies and ignoring priority sector lending. Furthermore, there was great resentment against class banking in India, which had left the poor (the majority of the population) unbanked. After becoming prime minister, Gandhi expressed her intention of nationalising the banks to alleviate poverty in a paper titled "Stray thoughts on Bank Nationalisation". The paper received overwhelming public support. In 1969, Gandhi moved to nationalise fourteen major commercial banks. After this, public sector bank branch deposits increased by approximately 800 percent; advances jumped by 11,000 percent. Nationalisation also resulted in significant growth in the geographic coverage of banks; the number of bank branches rose from 8,200 to over 62,000, most of which were opened in unbanked, rural areas. The nationalisation drive not only helped to increase household savings, but it also provided considerable investments in the informal sector, in small- and medium-sized enterprises, and in agriculture, and contributed significantly to regional development and to the expansion of India's industrial and agricultural base. Jayaprakash Narayan, who became famous for leading the opposition to Gandhi in the 1970s, strongly praised her nationalisation of banks.
Having been re-elected in 1971 on a nationalisation platform, Gandhi proceeded to nationalise the coal, steel, copper, refining, cotton textiles, and insurance industries. Most of this was done to protect employment and the interests of organised labour. The remaining private sector industries were placed under strict regulatory control.
During the Indo-Pakistani War of 1971, foreign-owned private oil companies had refused to supply fuel to the Indian Navy and the Indian Air Force. In response, Gandhi nationalised some oil companies in 1973. However, major nationalisations also occurred in 1974 and 1976, forming the oil majors. After nationalisation, the oil majors such as the Indian Oil Corporation (IOC), the Hindustan Petroleum Corporation (HPCL) and the Bharat Petroleum Corporation (BPCL) had to keep a minimum stock level of oil, to be supplied to the military when needed.
In 1966, Gandhi accepted the demands of the Akalis to reorganise Punjab on linguistic lines. The Hindi-speaking southern half of Punjab became a separate state, Haryana, while the Pahari-speaking hilly areas in the northeast were joined to Himachal Pradesh. By doing this, she hoped to ward off the growing political conflict between Hindu and Sikh groups in the region. However, a contentious issue that was considered unresolved by the Akalis was the status of Chandigarh, a prosperous city on the Punjab-Haryana border, which Gandhi declared a union territory to be shared as a capital by both states.
Victory over Pakistan in 1971 consolidated Indian power in Kashmir. Gandhi indicated that she would make no major concessions on Kashmir. The most prominent of the Kashmiri separatists, Sheikh Abdullah, had to recognise India's control over Kashmir in light of the new order in South Asia. The situation was normalised in the years following the war after Abdullah agreed to an accord with Gandhi, by giving up the demand for a plebiscite in return for a special autonomous status for Kashmir. In 1975, Gandhi declared the state of Jammu and Kashmir as a constituent unit of India. The Kashmir conflict remained largely peaceful if frozen under Gandhi's premiership.
In 1972, Gandhi granted statehood to Meghalaya, Manipur and Tripura, while the North-East Frontier Agency was declared a union territory and renamed Arunachal Pradesh. The transition to statehood for these territories was successfully overseen by her administration. This was followed by the annexation of Sikkim in 1975.
The principle of equal pay for equal work for both men and women was enshrined in the Indian Constitution under the Gandhi administration.
Gandhi questioned the continued existence of a privy purse for former rulers of princely states. She argued the case for abolition based on equal rights for all citizens and the need to reduce the government's revenue deficit. The nobility responded by rallying around the Jana Sangh and other right-wing parties that stood in opposition to Gandhi's attempts to abolish royal privileges. The motion to abolish the privy purses and the official recognition of the titles was originally brought before Parliament in 1970. It was passed in the Lok Sabha but fell short of the two-thirds majority in the Rajya Sabha by a single vote. Gandhi responded by having a presidential proclamation issued that de-recognised the princes; with this withdrawal of recognition, their claims to privy purses were also legally lost. However, the proclamation was struck down by the Supreme Court of India. In 1971, Gandhi again moved to abolish the privy purse. This time, the measure was passed successfully as the 26th Amendment to the Constitution of India.
Gandhi claimed that only "clear vision, iron will and the strictest discipline" could remove poverty. She justified the imposition of the state of emergency in 1975 in the name of the socialist mission of the Congress. Armed with the power to rule by decree and without constitutional constraints, Gandhi embarked on a massive redistribution program. The provisions included rapid enforcement of land ceilings, housing for landless labourers, the abolition of bonded labour and a moratorium on the debts of the poor. North India was at the centre of the reforms. Millions of hectares of land were acquired and redistributed. The government was also successful in procuring houses for landless labourers; according to Francine Frankel, three-fourths of the target of four million houses was achieved in 1975 alone. Nevertheless, others have disputed the success of the program and criticised Gandhi for not doing enough to reform land ownership. The political economist Jyotindra Das Gupta cryptically questioned "...whether or not the real supporters of land-holders were in jail or in power?" Critics also accused Gandhi of choosing to "talk left and act right", referring to her concurrent pro-business decisions and endeavours. J. Barkley Rosser Jr. wrote that "some have even seen the declaration of emergency rule in 1975 as a move to suppress dissent against Gandhi's policy shift to the right." Regardless of the controversy over the nature of the reforms, the long-term effects of the social changes gave rise to the prominence of middle-ranking farmers from intermediate and lower castes in North India. The rise of these newly empowered social classes challenged the political establishment of the Hindi Belt in the years to come.
Under the 1950 Constitution of India, Hindi was to have become the official national language by 1965. This was unacceptable to many non-Hindi-speaking states, which wanted the continued use of English in government. In 1967, Gandhi introduced a constitutional amendment that guaranteed the de facto use of both Hindi and English as official languages. This established the official government policy of bilingualism in India and satisfied the non-Hindi-speaking Indian states. Gandhi thus put herself forward as a leader with a pan-Indian vision. Nevertheless, critics alleged that her stance was actually meant to weaken the position of rival Congress leaders from the northern states such as Uttar Pradesh, where there had been strong, sometimes violent, pro-Hindi agitations. Gandhi came out of the language conflicts with the strong support of the south Indian populace.
In the late 1960s and 1970s, Gandhi had the Indian army crush militant Communist uprisings in the Indian state of West Bengal. The communist insurgency in India was completely suppressed during the state of emergency.
Gandhi considered the north-eastern region important because of its strategic location. In 1966, the Mizo uprising took place against the government of India and overran almost the whole of the Mizoram region. Gandhi ordered the Indian Army to launch massive retaliatory strikes in response. The rebellion was suppressed, with the Indian Air Force carrying out airstrikes in Aizawl; this remains the only instance of India carrying out an airstrike on its own territory. The defeat of Pakistan in 1971 and the secession of East Pakistan as pro-India Bangladesh led to the collapse of the Mizo separatist movement. In 1972, after the less extremist Mizo leaders came to the negotiating table, Gandhi upgraded Mizoram to the status of a union territory. A small-scale insurgency by some militants continued into the late 1970s, but it was successfully dealt with by the government. The Mizo conflict was resolved definitively during the administration of Gandhi's son Rajiv. Today, Mizoram is considered one of the most peaceful states in the north-east.
Responding to the insurgency in Nagaland, Gandhi "unleashed a powerful military offensive" in the 1970s. Finally, a massive crackdown on the insurgents took place during the state of emergency ordered by Gandhi. The insurgents soon agreed to surrender and signed the Shillong Accord in 1975. While the agreement was considered a victory for the Indian government and ended large-scale conflicts, there have since been spurts of violence by rebel holdouts and ethnic conflict amongst the tribes.
Gandhi contributed to, and carried out further, the vision of Jawaharlal Nehru, former premier of India, to develop its nuclear program. Gandhi authorised the development of nuclear weapons in 1967, in response to Test No. 6 by the People's Republic of China. Gandhi saw this test as Chinese nuclear intimidation and promoted Nehru's views to establish India's stability and security interests independent from those of the nuclear superpowers.
The programme became fully mature in 1974, when Dr. Raja Ramanna reported to Gandhi that India had the ability to test its first nuclear weapon. Gandhi gave verbal authorisation for this test, and preparations were made in the Indian Army's Pokhran Test Range. In 1974, India successfully conducted an underground nuclear test, unofficially code-named "Smiling Buddha", near the desert village of Pokhran in Rajasthan. While much of the world was quiet about the test, a vehement protest came from Pakistan, whose prime minister, Zulfikar Ali Bhutto, described the test as "Indian hegemony" meant to intimidate Pakistan. In response, Bhutto launched a massive campaign to make Pakistan a nuclear power, asking the nation to unite; slogans such as "hum ghaas aur pattay kha lay gay magar nuclear power ban k rhe gay" ("We will eat grass or leaves or even go hungry, but we will get nuclear power") were employed. Gandhi directed a letter to Bhutto, and later to the world, claiming the test was for peaceful purposes and part of India's commitment to develop its programme for industrial and scientific use.
In spite of intense international criticism and a steady decline in foreign investment and trade, the nuclear test was popular domestically. It caused an immediate revival of Gandhi's popularity, which had flagged considerably from its heights after the 1971 war. The overall popularity and image of the Congress Party were enhanced, and the party was well received in the Indian Parliament.
She married Feroze Gandhi at the age of 25, in 1942. Their marriage lasted 18 years until he died of a heart attack in 1960. They had two sons, Rajiv and Sanjay. Initially, her younger son Sanjay had been her chosen heir, but after his death in a flying accident in June 1980, Gandhi persuaded her reluctant elder son Rajiv to quit his job as a pilot and enter politics in February 1981. Rajiv took office as prime minister following his mother's assassination in 1984; he served until December 1989. Rajiv Gandhi himself was assassinated by a suicide bomber working on behalf of the LTTE on 21 May 1991.
In 1952 in a letter to her American friend Dorothy Norman, Gandhi wrote: "I am in no sense a feminist, but I believe in women being able to do everything ... Given the opportunity to develop, capable Indian women have come to the top at once." While this statement appears paradoxical, it reflects Gandhi's complex feelings toward her gender and feminism. Her egalitarian upbringing with her cousins helped contribute to her sense of natural equality. "Flying kites, climbing trees, playing marbles with her boy cousins, Indira said she hardly knew the difference between a boy and a girl until the age of twelve."
Gandhi did not often discuss her gender, but she did involve herself in women's issues before becoming the prime minister. Before her election as prime minister, she became active in the organisational wing of the Congress party, working in part in the Women's Department. In 1956, Gandhi had an active role in setting up the Congress Party's Women's Section. Much of her involvement stemmed from her father. As an only child, Gandhi naturally stepped into the political light. And, as a woman, she naturally helped head the Women's Section of the Congress Party. She often tried to organise women to involve themselves in politics. Although rhetorically Gandhi may have attempted to separate her political success from her gender, Gandhi did involve herself in women's organisations. The political parties in India paid substantial attention to Gandhi's gender before she became prime minister, hoping to use her for political gain. Even though men surrounded Gandhi during her upbringing, she still had a female role model as a child. Several books on Gandhi mention her interest in Joan of Arc. In a 1952 letter to her friend Dorothy Norman, she wrote: "At about eight or nine I was taken to France; Jeanne d'Arc became a great heroine of mine. She was one of the first people I read about with enthusiasm." Another historian recounts Indira's comparison of herself to Joan of Arc: "Indira developed a fascination for Joan of Arc, telling her aunt, 'Someday I am going to lead my people to freedom just as Joan of Arc did'!" Gandhi's linking of herself to Joan of Arc presents a model for historians to assess Gandhi. As one writer said: "The Indian people were her children; members of her family were the only people capable of leading them."
Gandhi had been swept up in the call for Indian independence since she was born in 1917. Thus by 1947, she was already well immersed in politics, and by 1966, when she first assumed the position of prime minister, she had held several cabinet positions in her father's office.
Gandhi's advocacy for women's rights began with her help in establishing the Congress Party's Women's Section. In 1956, she wrote in a letter: "It is because of this that I am taking a much more active part in politics. I have to do a great deal of touring in order to set up the Congress Party Women's Section, and am on numerous important committees." Gandhi spent a great deal of time throughout the 1950s helping to organise women. She wrote to Norman in 1959, irritated that women had organised around the communist cause but had not mobilised for the Indian cause: "The women, whom I have been trying to organize for years, had always refused to come into politics. Now they are out in the field." Once appointed president in 1959, she "travelled relentlessly, visiting remote parts of the country that had never before received a VIP ... she talked to women, asked about child health and welfare, inquired after the crafts of the region". Gandhi's actions throughout her ascent to power clearly reflect a desire to mobilise women. Yet Gandhi did not see the purpose of feminism; she saw her own success as a woman and noted that: "Given the opportunity to develop, capable Indian women have come to the top at once."
Gandhi felt guilty about her inability to fully devote her time to her children. She noted that her main problem in office was how to balance her political duties with tending to her children, and "stressed that motherhood was the most important part of her life." At another point, she went into more detail: "To a woman, motherhood is the highest fulfilment ... To bring a new being into this world, to see its perfection and to dream of its future greatness is the most moving of all experiences and fills one with wonder and exaltation."
Her domestic initiatives did not necessarily reflect favourably on Indian women. Gandhi did not make a special effort to appoint women to cabinet positions. She did not appoint any women to full cabinet rank during her terms in office. Yet despite this, many women saw Gandhi as a symbol for feminism and an image of women's power.
The American statesman Henry A. Kissinger described Indira Gandhi as an "Iron Lady". After she led India to victory against Pakistan in the Bangladesh Liberation War in 1971, President V. V. Giri awarded Gandhi India's highest civilian honour, the Bharat Ratna.
In 2011, the Bangladesh Freedom Honour, Bangladesh's highest civilian award for foreign nationals, was posthumously conferred on Gandhi for her "outstanding contributions" to Bangladesh's Liberation War.
Gandhi's main legacy was standing firm in the face of American pressure to defeat Pakistan and turn East Pakistan into independent Bangladesh. She was also responsible for India joining the group of countries with nuclear weapons. Although India was officially part of the Non-Aligned Movement, she gave Indian foreign policy a tilt towards the Soviet bloc.
In 1999, Gandhi was named "Woman of the Millennium" in an online poll organised by the BBC. In 2012, she was ranked number seven on Outlook India's poll of the Greatest Indian.
Having been at the forefront of Indian politics for decades, Gandhi left a powerful but controversial legacy. One criticism is that her rule damaged internal party democracy in the Congress party. Her detractors accuse her of weakening state chief ministers and thereby weakening the federal structure, weakening the independence of the judiciary, and weakening her cabinet by vesting power in her secretariat and her sons. Gandhi is also associated with fostering a culture of nepotism in Indian politics and in India's institutions. She is also almost singularly associated with the period of Emergency rule, described by some as a "dark period" in Indian democracy. The Forty-second Amendment of the Constitution of India, which was adopted during the Emergency, can also be regarded as part of her legacy. Although judicial challenges and non-Congress governments tried to water down the amendment, it still stands.
She remains the only woman to have occupied the office of prime minister of India. In 2020, Gandhi was named by Time magazine among the 100 women who defined the past century. Shakti Sthal, whose name literally translates to "place of strength", is a monument to her.
While portrayals of Indira Gandhi by actors in Indian cinema have generally been avoided, with filmmakers using back-shots, silhouettes and voiceovers to give impressions of her character, several films surrounding her tenure, policies or assassination have been made.
These include Aandhi (1975) by Gulzar, Kissa Kursi Ka (1975) by Amrit Nahata, Nasbandi (1978) by I. S. Johar, Maachis (1996) by Gulzar, Hazaaron Khwaishein Aisi (2003) by Sudhir Mishra, Hawayein (2003) by Ammtoje Mann, Des Hoyaa Pardes (2004) by Manoj Punj, Kaya Taran (2004) by Sashi Kumar, Amu (2005) by Shonali Bose, Kaum De Heere (2014) by Ravinder Ravi, 47 to 84 (2014) by Rajiv Sharma, Punjab 1984 (2014) by Anurag Singh, The Fourth Direction (2015) by Gurvinder Singh, Dharam Yudh Morcha (2016) by Naresh S. Garg, 31 October (2016) by Shivaji Lotan Patil, Baadshaho (2017) by Milan Luthria, Toofan Singh (2017) by Baghal Singh, Sonchiriya (2019) by Abhishek Chaubey, Shukranu (2020) by Bishnu Dev Halder. Aandhi, Kissa Kursi Ka and Nasbandi are notable for having been released during Gandhi's lifetime and were subject to censorship on exhibition during the Emergency.
Indus Valley to Indira Gandhi is a 1970 Indian two-part documentary film by S. Krishnaswamy which traces the history of India from the earliest times of the Indus Valley Civilization to the prime ministership of Indira Gandhi. The Films Division of India produced Our Indira, a 1973 short documentary film directed by S.N.S. Sastry showing the beginning of her first tenure as PM and her speeches from the Stockholm Conference.
Pradhanmantri (lit. 'Prime Minister'), a 2013 Indian documentary television series which aired on ABP News and surveys the policies and political tenures of Indian PMs, covers Gandhi's tenure in the episodes "Indira Gandhi Becomes PM", "Split in Congress Party", "Story before Indo-Pakistani War of 1971", "Indo-Pakistani War of 1971 and Birth of Bangladesh", "1975–77 State of Emergency in India", and "Indira Gandhi back as PM and Operation Blue Star", with Navni Parihar portraying Gandhi. Parihar also portrays Gandhi in the 2021 Indian film Bhuj: The Pride of India, which is based on the 1971 Indo-Pakistani War.
The taboo surrounding the depiction of Indira Gandhi in Indian cinema has begun to dissipate in recent years, with actors portraying her in films. Notable portrayals include: Sarita Choudhury in Midnight's Children (2012); Mandeep Kohli in Jai Jawaan Jai Kisaan (2015); Supriya Vinod in Indu Sarkar (2017), NTR: Kathanayakudu/NTR: Mahanayakudu (2019) and Yashwantrao Chavan – Bakhar Eka Vaadalaachi (2014); Flora Jacob in Raid (2018), Thalaivi (2021) and Radhe Shyam (2022); Kishori Shahane in PM Narendra Modi (2019); Avantika Akerkar in Thackeray (2019) and 83 (2021); Supriya Karnik in Main Mulayam Singh Yadav (2021); Lara Dutta in Bell Bottom (2021); and Fatima Sana Shaikh in Sam Bahadur.
Book written by Indira Gandhi
Books on Indira Gandhi | [
{
"paragraph_id": 0,
"text": "Indira Priyadarshini Gandhi (Hindi: [ˈɪndɪɾɑː ˈɡɑːndʱi] ; née Nehru; 19 November 1917 – 31 October 1984) was an Indian politician and stateswoman who served as the 3rd Prime Minister of India from 1966 to 1977 and again from 1980 until her assassination in 1984. She was India's first and, to date, only female prime minister, and a central figure in Indian politics as the leader of the Indian National Congress. Gandhi was the daughter of Jawaharlal Nehru, the first prime minister of India, and the mother of Rajiv Gandhi, who succeeded her in office as the country's sixth prime minister. Furthermore, Gandhi's cumulative tenure of 15 years and 350 days makes her the second-longest-serving Indian prime minister after her father. Henry Kissinger described her as an \"Iron Lady\", a nickname that became associated with her tough personality since her lifetime.",
"title": ""
},
{
"paragraph_id": 1,
"text": "During Nehru's premiership from 1947 to 1964, Gandhi served as his hostess and accompanied him on his numerous foreign trips. In 1959, she played a part in the dissolution of the communist-led Kerala state government as then-president of the Indian National Congress, otherwise a ceremonial position to which she was elected earlier that year. Lal Bahadur Shastri, who had succeeded Nehru as prime minister upon his death in 1964, appointed her minister of information and broadcasting in his government; the same year she was elected to the Rajya Sabha, the upper house of the Indian Parliament. On Shastri's sudden death in January 1966, Gandhi defeated her rival, Morarji Desai, in the Congress Party's parliamentary leadership election to become leader and also succeeded Shastri as prime minister. She led the Congress to victory in two subsequent elections, starting with the 1967 general election, in which she was first elected to the lower house of the Indian parliament, the Lok Sabha. In 1971, the Congress Party headed by Gandhi managed to secure its first landslide victory since her father's sweep in 1962, focusing on issues such as poverty. But following the nationwide Emergency implemented by her, she faced massive anti-incumbency and lost the 1977 general election, the first time for the Congress party to do so. Gandhi was ousted from office and even lost her seat in parliament in the election. Nevertheless, her faction of the Congress Party won the next general election by a landslide, due to Gandhi's leadership and weak governance of the Janata Party rule, the first non-Congress government in independent modern India's history.",
"title": ""
},
{
"paragraph_id": 2,
"text": "As prime minister, Gandhi was known for her political intransigence and unprecedented centralization of power. In 1967, she headed a military conflict with China in which India successfully repelled Chinese incursions in the Himalayas. In 1971, she went to war with Pakistan in support of the independence movement and war of independence in East Pakistan, which resulted in an Indian victory and the creation of Bangladesh, as well as increasing India's influence to the point where it became the sole regional power in South Asia. Gandhi's rule saw India grow closer to the Soviet Union by signing a friendship treaty in 1971, with India receiving military, financial, and diplomatic support from the Soviet Union during its conflict with Pakistan in the same year. Despite India being at the forefront of the non-aligned movement, Gandhi led India to become one of the Soviet Union's closest allies in Asia, with India and the Soviet Union often supporting each other in proxy wars and at the United Nations. Citing separatist tendencies and in response to a call for revolution, Gandhi instituted a state of emergency from 1975 to 1977, during which basic civil liberties were suspended and the press was censored. Widespread atrocities were carried out during that period. Gandhi faced the growing Sikh separatism throughout her third premiership; in response, she ordered Operation Blue Star, which involved military action in the Golden Temple and resulted in bloodshed with hundreds of Sikhs killed. On 31 October 1984, Gandhi was assassinated by her bodyguards, both of whom were Sikh nationalists seeking retribution for the events at the temple.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Indira Gandhi is remembered as the most powerful woman in the world during her tenure. Her supporters cite her leadership during victories over geopolitical rivals China and Pakistan, the Green Revolution, a growing economy in the early 1980s, and her anti-poverty campaign that led her to be known as \"Mother Indira\" (a pun on Mother India) among the country's poor and rural classes. However, critics note her authoritarian rule of India during the Emergency. In 1999, Gandhi was named \"Woman of the Millennium\" in an online poll organized by the BBC. In 2020, Gandhi was named by Time magazine among the 100 women who defined the past century as counterparts to the magazine's previous choices for Man of the Year.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Indira Gandhi was born Indira Nehru, into a Kashmiri Pandit family on 19 November 1917 in Allahabad. Her father, Jawaharlal Nehru, was a leading figure in the movement for independence from British rule, and became the first Prime Minister of the Dominion (and later Republic) of India. She was the only child (she had a younger brother who died young), and grew up with her mother, Kamala Nehru, at the Anand Bhavan, a large family estate in Allahabad. In 1930, the Nehru family donated the mansion to the Indian National Congress, and renamed it Swaraj Bhavan (lit. abode of freedom). A new mansion was built nearby to serve as the family residence and given the name of the old Anand Bhavan. Indira had a lonely and unhappy childhood. Her father was often away, directing political activities or incarcerated, while her mother was frequently bedridden with illness, and later suffered an early death from tuberculosis. She had limited contact with her father, mostly through letters.",
"title": "Early life and career"
},
{
"paragraph_id": 5,
"text": "Indira was taught mostly at home by tutors and attended school intermittently until matriculation in 1934. She was a student at the Modern School in Delhi, St. Cecilia's and St. Mary's Convent schools in Allahabad, the International School of Geneva, the Ecole Nouvelle in Bex, and the Pupils' Own School in Poona and Bombay, which is affiliated with the University of Mumbai. She and her mother Kamala moved to the Belur Math headquarters of the Ramakrishna Mission where Swami Ranganathananda was her guardian. She went on to study at the Vishwa Bharati in Santiniketan, which became Visva-Bharati University in 1951. It was during her interview with him that Rabindranath Tagore named her Priyadarshini, literally \"looking at everything with kindness\" in Sanskrit, and she came to be known as Indira Priyadarshini Nehru. A year later, however, she had to leave university to attend to her ailing mother in Europe. There it was decided that Indira would continue her education at the University of Oxford. After her mother died, she attended the Badminton School for a brief period before enrolling at Somerville College in 1937 to study history. Indira had to take the entrance examination twice, having failed at her first attempt with a poor performance in Latin. At Oxford, she did well in history, political science and economics, but her grades in Latin—a compulsory subject—remained poor. Indira did, however, have an active part within the student life of the university, such as membership in the Oxford Majlis Asian Society.",
"title": "Early life and career"
},
{
"paragraph_id": 6,
"text": "During her time in Europe, Indira was plagued with ill health and was constantly attended to by doctors. She had to make repeated trips to Switzerland to recover, disrupting her studies. She was being treated there in 1940, when Germany rapidly conquered Europe. Indira tried to return to England through Portugal but was left stranded for nearly two months. She managed to enter England in early 1941, and from there returned to India without completing her studies at Oxford. The university later awarded her an honorary degree. In 2010, Oxford honoured her further by selecting her as one of the ten Oxasians, illustrious Asian graduates from the University of Oxford. During her stay in Britain, Indira frequently met her future husband Feroze Gandhi (no relation to Mahatma Gandhi), whom she knew from Allahabad, and who was studying at the London School of Economics. Their marriage took place in Allahabad according to Adi Dharm rituals, though Feroze belonged to a Zoroastrian Parsi family of Gujarat. The couple had two sons, Rajiv Gandhi (born 1944) and Sanjay Gandhi (born 1946).",
"title": "Early life and career"
},
{
"paragraph_id": 7,
"text": "On September 1942, Indira was arrested over her role in the Quit India Movement. She was released from jail in April 1943. \"Mud entered our souls in the drabness of prison,\" she later recalled her time in the jail. She added, \"When I came out, it was such a shock to see colors again I thought I would go out of my mind.\"",
"title": "Early life and career"
},
{
"paragraph_id": 8,
"text": "In the 1950s, Indira, now Mrs. Indira Gandhi after her marriage, served her father unofficially as a personal assistant during his tenure as the first prime minister of India. Towards the end of the 1950s, Gandhi served as the President of the Congress. In that capacity, she was instrumental in getting the communist-led Kerala state government dismissed in 1959. That government was India's first elected communist government. After her father's death in 1964 she was appointed a member of the Rajya Sabha (upper house) and served in Prime Minister Lal Bahadur Shastri's cabinet as Minister of Information and Broadcasting. In January 1966, after Shastri's death, the Congress legislative party elected her over Morarji Desai as their leader. Congress party veteran K. Kamaraj was instrumental in Gandhi achieving victory. Because she was a woman, other political leaders in India saw Gandhi as weak and hoped to use her as a puppet once elected:",
"title": "Early life and career"
},
{
"paragraph_id": 9,
"text": "Congress President Kamaraj orchestrated Mrs. Gandhi's selection as prime minister because he perceived her to be weak enough that he and the other regional party bosses could control her, and yet strong enough to beat Desai [her political opponent] in a party election because of the high regard for her father ... a woman would be an ideal tool for the Syndicate.",
"title": "Early life and career"
},
{
"paragraph_id": 10,
"text": "Gandhi's first eleven years serving as prime minister saw her evolve from the perception of Congress party leaders as their puppet, to a strong leader with the iron resolve to split the party over her policy positions, or to go to war with Pakistan to assist Bangladesh in the 1971 liberation war. At the end of 1977, she was such a dominating figure in Indian politics that Congress party president D. K. Barooah had coined the phrase \"India is Indira and Indira is India.\"",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 11,
"text": "Gandhi formed her government with Morarji Desai as deputy prime minister and finance minister. At the beginning of her first term as prime minister, she was widely criticised by the media and the opposition as a \"Goongi goodiya\" (Hindi for a \"dumb doll\") of the Congress party bosses who had orchestrated her election and then tried to constrain her. Indira was a reluctant successor to her famed father, although she had accompanied him on several official foreign visits and played an anchor role in bringing down the first democratically elected communist government in Kerala. According to certain sources it was the socialist leader Ram Manohar Lohia that first derided her personality as the \"Goongi Goodiya\" (Hindi for \"dumb doll\") that later was echoed by other Congress politicians who were wary of her rise in the party.",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 12,
"text": "One of her first major action was to crush the separatist Mizo National Front uprising in Mizoram in 1966.",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 13,
"text": "The first electoral test for Gandhi was the 1967 general elections for the Lok Sabha and state assemblies. The Congress Party won a reduced majority in the Lok Sabha after these elections owing to widespread disenchantment over the rising prices of commodities, unemployment, economic stagnation and a food crisis. Gandhi was elected to the Lok Sabha from the Raebareli constituency. She had a rocky start after agreeing to devalue the rupee which created hardship for Indian businesses and consumers. The importation of wheat from the United States fell through due to political disputes.",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 14,
"text": "For the first time, the party also lost power or lost its majority in a number of states across the country. Following the 1967 elections, Gandhi gradually began to move towards socialist policies. In 1969, she fell out with senior Congress party leaders over several issues. Chief among them was her decision to support V. V. Giri, the independent candidate rather than the official Congress party candidate Neelam Sanjiva Reddy for the vacant position of president of India. The other was the announcement by the prime minister of Bank nationalisation without consulting the finance minister, Morarji Desai. These steps culminated in party president S. Nijalingappa expelling her from the party for indiscipline. Gandhi, in turn, floated her own faction of the Congress party and managed to retain most of the Congress MPs on her side with only 65 on the side of the Congress (O) faction. The Gandhi faction, called Congress (R), lost its majority in the parliament but remained in power with the support of regional parties such as DMK. The policies of the Congress under Gandhi, before the 1971 elections, also included proposals for the abolition of the Privy Purse to former rulers of the princely states and the 1969 nationalization of the fourteen largest banks in India.",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 15,
"text": "In 1967, a military conflict alongside the border of the Himalayan Kingdom of Sikkim, then an Indian protectorate, broke out between India and China. India emerged as the victor by successfully repelling Chinese attacks and forced the subsequent withdrawal of Chinese forces from the region.",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 16,
"text": "Throughout the conflict, the Indian losses were 88 killed and 163 wounded while Chinese casualties stood at 340 killed and 450 wounded, according to the Indian Defense Ministry. Chinese sources made no declarations of casualties but alleged India to be the aggressor.",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 17,
"text": "In December 1967, Indira Gandhi remarked these developments that \"China continues to maintain an attitude of hostility towards us and spares no opportunity to malign us and to carry on anti-Indian propaganda not only against the Indian Government but the whole way of our democratic functioning.\"",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 18,
"text": "In 1975, Gandhi incorporated Sikkim into India, after a referendum in which a majority of Sikkimese voted to join India. This move was condemned as a \"despicable act of the Indian Government\" by China. Chinese government mouthpiece China Daily wrote that \"the Nehrus, father and daughter, had always acted in this way, and Indira Gandhi had gone further\".",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 19,
"text": "Garibi Hatao (Remove Poverty) was the resonant theme for Gandhi's 1971 political bid. The slogan was developed in response to the combined opposition alliance's use of the two-word manifesto—\"Indira Hatao\" (Remove Indira). The Garibi Hatao slogan and the proposed anti-poverty programs that came with it were designed to give Gandhi independent national support, based on the rural and urban poor. This would allow her to bypass the dominant rural castes both in and of state and local governments as well as the urban commercial class. For their part, the previously voiceless poor would at last gain both political worth and political weight. The programs created through Garibi Hatao, though carried out locally, were funded and developed by the Central Government in New Delhi. The program was supervised and staffed by the Indian National Congress party. \"These programs also provided the central political leadership with new and vast patronage resources to be disbursed ... throughout the country.\"",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 20,
"text": "The Congress government faced numerous problems during this term. Some of these were due to high inflation which in turn was caused by wartime expenses, drought in some parts of the country and, more importantly, the 1973 oil crisis. Opposition to her in the 1973–75 period, after the Gandhi wave had receded, was strongest in the states of Bihar and Gujarat. In Bihar, Jayaprakash Narayan, the veteran leader came out of retirement to lead the protest movement there.",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 21,
"text": "Gandhi's biggest achievement following the 1971 election came in December 1971 with India's decisive victory over Pakistan in the Indo-Pakistani War. That victory occurred in the last two weeks of the Bangladesh Liberation War, which led to the formation of independent Bangladesh. An insurgency in East Pakistan (now Bangladesh) formed in early 1971, with Bengali's and East Pakistanis revolting against authoritarian rule from the central West Pakistan Government. In response, Pakistani security forces launched the infamous Operation Searchlight, in which Pakistan committed genocide among Bengali Hindus, nationalists and intelligentsia. Gandhi's India was initially restrained from intervening in the insurgency but quickly started to support Bengali rebels through the provision of military supplies. Indian forces clashed multiple times with Pakistani forces in the Eastern border. At one point, Indian forces along with Mukti Bahini rebels allied together and attacked Pakistani forces at Dhalai. The attack, supported and later successfully executed by India, was done to stop Pakistani cross-border shelling. The battle occurred more than a month before India's official intervention in December. Gandhi quickly dispatched more troops to the Eastern border with East Pakistan, hoping to support Mukti Bahini rebels and cease any Pakistani infiltration. Indian forces then clashed again with Pakistani forces after Indian forces crossed the border and secured Garibpur after a one day battle lasting from 20 November 1971 to the 21st. The next day, on 22 November, Indian and Pakistani aircraft engaged in a dogfight over the Boyra Salient, in which thousands of people watched as 4 Indian Folland Gnats shot down 2 Pakistani Canadair Sabres and damaged another. Both Pakistani pilots that were shot down were captured as prisoners of war. The Battle of Boyra instantly made the 4 Indian pilots celebrities and created large-scale nationalism as the Bangladesh Liberation War saw more and more Indian intervention and escalation. Other clashes also happened on the same day but did not receive as much media attention as did the battle of Boyra and Garibpur. On 3 December 1971, the Pakistan Air Force launched Operation Chengiz Khan, which saw Pakistani aircraft attacking Indian airbases and military installations across the Western border in a pre-emptive strike. The initial night-time attack by Pakistani forces was foiled, failing to inflict any major damage on Indian airbases, allowing Indian aircraft to counterattack into West Pakistan. Gandhi quickly declared a state of emergency and addressed the nation on radio shortly after midnight, stating: \"We must be prepared for a long period of hardship and sacrifice.\"",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 22,
"text": "Both countries mobilized for war and Gandhi ordered full-out war, ordering an invasion into East Pakistan. Pakistan's Navy had not improved since the 1965 war, while the Pakistani airforce could not launch attacks on the same scale as the Indian airforce. The Pakistan Army quickly attempted major land operations on the Western border, but most of these attacks besides some in Kashmir stalled, and allowed Indian counterattacks to gain land. The Pakistan Army lacked wide-scale organization which contributed to miscommunication and high casualties in the Western front.",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 23,
"text": "In the Eastern Front of the war, Indian generals opted for a high speed lightning war, using mechanized and airborne units to quickly bypass Pakistani opposition and make quick strides towards the capital of East Pakistan, Dhaka. Jagjit Singh Aurora (who later became a critic of Gandhi in 1984) led Indian Army's Eastern Command. The Indian Air Force quickly overcame the small contingent of Pakistani aircraft in East Pakistan, allowing for air superiority over the region. Indian forces liberated Jessore and several other towns during the Battle of Sylhet between 7 December and 15 December 1971, which saw India conduct its first heliborne operation. India then conducted another airdrop on December 9, with Indian forces led by Major General Sagat Singh capturing just under 5,000 Pakistani POWs and also crossing the Meghna River towards Dhaka. Two days later, Indian forces conducted the largest airborne operation since World War Ii. 750 men of the Army's Parachute Regiment landed in Tangail and defeated the Pakistani forces in the area, securing a direct route to Dhaka. Little Pakistani forces escaped the battle with only 900 out of 7000 soldiers retreating back to Dhaka alive. By December 12, Indian forces had reached the outskirts of Dhaka and had prepared to besiege the capital. Indian heavy artillery arrived by the 14th, and shelled the city.",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 24,
"text": "As surrender became apparent by 14 December 1971, Pakistani paramilitaries and militia roamed the streets of Dhaka during the night, kidnapping, torturing and then executing any educated Bengali who was viewed as someone who could lead Bangladesh once Pakistan surrendered. Over 200 of these people were killed on the 14th. By 16 December, Pakistani morale had reached a low point, with the Indian Army finally encircling Dhaka and besieging the city. On the 16th, Indian forces issued a 30-minute ultimatum for the city to surrender. Seeing that the city's defences paled in comparison to the Mukti Bahini and Indian forces outside the city, Lt-Gen. A.A.K. Niazi (Cdr. of Eastern Command) and his deputy, V-Adm. M.S. Khan surrendered the city without resistance. BBC News captured the moment of surrender as Indian soldiers from the Parachute Regiment streamed into the city. As Indian forces and Mukti Bahini rounded up the remaining Pakistani forces, Lieutenant General Jagjit Singh Aurora of India and A.A.K. Niazi of Pakistan signed the Pakistani Instrument of Surrender at 16:31Hrs IST on 16 December 1971. The surrender signified the collapse of the East Pakistan Government along with the end of the war. 93,000 soldiers of the Pakistani security forces surrendered, the largest surrender since World War II. The entire four-tiered military surrendered to India along with its officers and generals. Large crowds flooded the scenes as anti-Pakistani slogans emerged and Pakistani POWs were beaten by the locals. Eventually, Indian officers formed a human-chain to protect Pakistani POWs and Niazi from being lynched by the belligerent locals. Most of the 93,000 captured were Pakistan Army officers or paramilitary officers, along with 12,000 supporters (razakars). Hostilities officially ended on 17 December 1971. 8,000 Pakistani soldiers were killed along with 25,000 wounded; Indian forces suffered only 3,000 dead and 12,000 wounded. India claimed to have captured 3.6k square kilometres of Pakistani land on the Western Front while losing 126 square kilometres of land to Pakistan.",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 25,
"text": "Gandhi was hailed as Goddess Durga by the people as well as the opposition leaders at the time when India defeated Pakistan in the war. In the elections held for State assemblies across India in March 1972, the Congress (R) swept to power in most states riding on the post-war \"Indira wave\".",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 26,
"text": "On 12 June 1975, the Allahabad High Court declared Indira Gandhi's election to the Lok Sabha in 1971 void on the grounds of electoral malpractice. In an election petition filed by her 1971 opponent, Raj Narain (who later defeated her in the 1977 parliamentary election running in the Raebareli constituency), alleged several major as well as minor instances of the use of government resources for campaigning. Gandhi had asked one of her colleagues in government, Ashoke Kumar Sen, to defend her in court. She gave evidence in her defence during the trial. After almost four years, the court found her guilty of dishonest election practices, excessive election expenditure, and of using government machinery and officials for party purposes. The judge, however, rejected the more serious charges of bribery, laid against her in the case.",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 27,
"text": "The court ordered her stripped of her parliamentary seat and banned her from running for any office for six years. As the constitution requires that the Prime Minister must be a member of either the Lok Sabha or the Rajya Sabha, the two houses of the Parliament of India, she was effectively removed from office. However, Gandhi rejected calls to resign. She announced plans to appeal to the Supreme Court and insisted that the conviction did not undermine her position. She said: \"There is a lot of talk about our government not being clean, but from our experience the situation was very much worse when [opposition] parties were forming governments.\" And she dismissed criticism of the way her Congress Party raised election campaign money, saying all parties used the same methods. The prime minister retained the support of her party, which issued a statement backing her.",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 28,
"text": "After news of the verdict spread, hundreds of supporters demonstrated outside her house, pledging their loyalty. Indian High Commissioner to the United Kingdom Braj Kumar Nehru said Gandhi's conviction would not harm her political career. \"Mrs Gandhi has still today overwhelming support in the country,\" he said. \"I believe the prime minister of India will continue in office until the electorate of India decides otherwise\".",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 29,
"text": "Gandhi moved to restore order by ordering the arrest of most of the opposition participating in the unrest. Her Cabinet and government recommended that then President Fakhruddin Ali Ahmed declare a state of emergency because of the disorder and lawlessness following the Allahabad High Court decision. Accordingly, Ahmed declared a State of Emergency caused by internal disorder, based on the provisions of Article 352(1) of the Constitution, on 25 June 1975. At the time of Emergency, There was a widespread rumour that Indira had ordered her search guards to eliminate firebrand trade unionist and socialist party leader George Fernandes, while he was on a run. Few International organisations and Government officials issued request letters to Indira Gandhi pleading her to relinquish such decrees. Fernandes had called a nationwide railway strike in 1974, that shut the railways for three weeks and became the largest industrial action in Asia. Indira had turned furious over him and the strike was massively cracked down.",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 30,
"text": "Within a few months, President's rule was imposed on the two opposition party ruled states of Gujarat and Tamil Nadu thereby bringing the entire country under direct Central rule or by governments led by the ruling Congress party. Police were granted powers to impose curfews and detain citizens indefinitely; all publications were subjected to substantial censorship by the Ministry of Information and Broadcasting. Finally, the impending legislative assembly elections were postponed indefinitely, with all opposition-controlled state governments being removed by virtue of the constitutional provision allowing for a dismissal of a state government on the recommendation of the state's governor.",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 31,
"text": "Indira Gandhi used the emergency provisions to change conflicting party members:",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 32,
"text": "Unlike her father Jawaharlal Nehru, who preferred to deal with strong chief ministers in control of their legislative parties and state party organizations, Mrs. Gandhi set out to remove every Congress chief minister who had an independent base and to replace each of them with ministers personally loyal to her...Even so, stability could not be maintained in the states...",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 33,
"text": "President Ahmed issued ordinances that did not require debate in the Parliament, allowing Gandhi to rule by decree.",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 34,
"text": "The Emergency saw the entry of Gandhi's younger son, Sanjay Gandhi, into Indian politics. He wielded tremendous power during the emergency without holding any government office. According to Mark Tully, \"His inexperience did not stop him from using the Draconian powers his mother, Indira Gandhi, had taken to terrorise the administration, setting up what was in effect a police state.\"",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 35,
"text": "It was said that during the Emergency he virtually ran India along with his friends, especially Bansi Lal. It was also quipped that Sanjay Gandhi had total control over his mother and that the government was run by the PMH (Prime Minister House) rather than the PMO (Prime Minister Office).",
"title": "First, second and third term as prime minister between 1966 and 1977"
},
{
"paragraph_id": 36,
"text": "In 1977, after extending the state of emergency twice, Gandhi called elections to give the electorate a chance to vindicate her rule. She may have grossly misjudged her popularity by reading what the heavily censored press wrote about her. She was opposed by the Janata alliance of Opposition parties. The alliance was made up of Bharatiya Jana Sangh, Congress (O), The Socialist parties, and Charan Singh's Bharatiya Kranti Dal representing northern peasants and farmers. The Janata alliance, with Jai Prakash Narayan as its spiritual guide, claimed the elections were the last chance for India to choose between \"democracy and dictatorship\". The Congress Party split during the election campaign of 1977: veteran Gandhi supporters like Jagjivan Ram, Hemvati Nandan Bahuguna and Nandini Satpathy were compelled to part ways and form a new political entity, the CFD (Congress for Democracy), due primarily to intra-party politicking and the circumstances created by Sanjay Gandhi. The prevailing rumour was that he intended to dislodge Gandhi, and the trio stood to prevent that. Gandhi's Congress party was soundly crushed in the elections. The Janata Party's democracy or dictatorship claim seemed to resonate with the public. Gandhi and Sanjay Gandhi lost their seats, and Congress was reduced to 153 seats (compared with 350 in the previous Lok Sabha), 92 of which were in the South. The Janata alliance, under the leadership of Morarji Desai, came to power after the State of Emergency was lifted. The alliance parties later merged to form the Janata Party under the guidance of Gandhian leader, Jayaprakash Narayan. The other leaders of the Janata Party were Charan Singh, Raj Narain, George Fernandes and Atal Bihari Vajpayee.",
"title": "1977 election and opposition years"
},
{
"paragraph_id": 37,
"text": "After the humiliating defeat in the election, the king of Nepal, through an intermediatory offered her and her family to shift to Nepal. She refeused to shift herself, but was open to move her two sons Sanjay Gandhi and Rajiv Gandhi. However, after consulting with Kao, she declined the offer altogether keeping in view of her future political career.",
"title": "1977 election and opposition years"
},
{
"paragraph_id": 38,
"text": "Since Gandhi had lost her seat in the election, the defeated Congress party appointed Yashwantrao Chavan as their parliamentary party leader. Soon afterwards, the Congress party split again with Gandhi floating her own Congress faction called Congress(I) where I stood for Indira. She won a by-election in the Chikmagalur Constituency and took a seat in the Lok Sabha in November 1978 after the Janata Party's attempts to have Kannada matinee idol Rajkumar run against her failed when he refused to contest the election saying he wanted to remain apolitical. However, the Janata government's home minister, Charan Singh, ordered her arrest along with Sanjay Gandhi on several charges, none of which would be easy to prove in an Indian court. The arrest meant that Gandhi was automatically expelled from Parliament. These allegations included that she \"had planned or thought of killing all opposition leaders in jail during the Emergency\". However, this strategy backfired disastrously. In response to her arrest, Gandhi's supporters hijacked an Indian Airlines jet and demanded her immediate release. Her arrest and long-running trial gained her sympathy from many people. The Janata coalition was only united by its hatred of Gandhi (or \"that woman\" as some called her). The party included right wing Hindu Nationalists, Socialists and former Congress party members. With so little in common, the Morarji Desai government was bogged down by infighting. In 1979, the government began to unravel over the issue of the dual loyalties of some members to Janata and the Rashtriya Swayamsevak Sangh (RSS)—the Hindu nationalist, paramilitary organisation. The ambitious Union finance minister, Charan Singh, who as the Union home minister during the previous year had ordered the Gandhi's' arrests, took advantage of this and started courting Indira and Sanjay. After a significant exodus from the party to Singh's faction, Desai resigned in July 1979. Singh was appointed prime minister, by President Reddy, after Gandhi and Sanjay Gandhi promised Singh that Congress (I) would support his government from outside on certain conditions. The conditions included dropping all charges against Gandhi and Sanjay. Since Singh refused to drop them, Congress (I) withdrew its support and President Reddy dissolved Parliament in August 1979.",
"title": "1977 election and opposition years"
},
{
"paragraph_id": 39,
"text": "Before the 1980 elections Gandhi approached the then Shahi Imam of Jama Masjid, Syed Abdullah Bukhari and entered into an agreement with him on the basis of 10-point programme to secure the support of the Muslim votes. In the elections held in January, Congress (I) under Indira's leadership returned to power with a landslide majority.",
"title": "1977 election and opposition years"
},
{
"paragraph_id": 40,
"text": "The Congress Party under Gandhi swept back into power in January 1980. In this election, Gandhi was elected by the voters of the Medak constituency. On 23 June, Sanjay was killed in a plane crash while performing an aerobatic manoeuvre in New Delhi. In 1980, as a tribute to her son's dream of launching an indigenously manufactured car, Gandhi nationalized Sanjay's debt-ridden company, Maruti Udyog, for Rs. 43,000,000 (4.34 crore) and invited joint venture bids from automobile companies around the world. Suzuki of Japan was selected as the partner. The company launched its first Indian-manufactured car in 1984.",
"title": "1980 elections and fourth term"
},
{
"paragraph_id": 41,
"text": "By the time of Sanjay's death, Gandhi trusted only family members, and therefore persuaded her reluctant son, Rajiv, to enter politics.",
"title": "1980 elections and fourth term"
},
{
"paragraph_id": 42,
"text": "Her PMO office staff included H. Y. Sharada Prasad as her information adviser and speechwriter.",
"title": "1980 elections and fourth term"
},
{
"paragraph_id": 43,
"text": "Following the 1977 elections, a coalition led by the Sikh-majority Akali Dal came to power in the northern Indian state of Punjab. In an effort to split the Akali Dal and gain popular support among the Sikhs, Gandhi's Congress Party helped to bring the orthodox religious leader Jarnail Singh Bhindranwale to prominence in Punjab politics. Later, Bhindranwale's organisation, Damdami Taksal, became embroiled in violence with another religious sect called the Sant Nirankari Mission, and he was accused of instigating the murder of Jagat Narain, the owner of the Punjab Kesari newspaper. After being arrested over this matter, Bhindranwale disassociated himself from the Congress Party and joined Akali Dal. In July 1982, he led the campaign for the implementation of the Anandpur Resolution, which demanded greater autonomy for the Sikh-majority state. Meanwhile, a small group of Sikhs, including some of Bhindranwale's followers, turned to militancy after being targeted by government officials and police for supporting the Anandpur Resolution. In 1982, Bhindranwale and approximately 200 armed followers moved into a guest house called the Guru Nanak Niwas near the Golden Temple.",
"title": "1980 elections and fourth term"
},
{
"paragraph_id": 44,
"text": "By 1983, the Temple complex had become a fort for many militants. The Statesman later reported that light machine guns and semi-automatic rifles were known to have been brought into the compound. On 23 April 1983, the Punjab Police Deputy Inspector General A. S. Atwal was shot dead as he left the Temple compound. The following day, Harchand Singh Longowal (then president of Akali Dal) confirmed the involvement of Bhindranwale in the murder.",
"title": "1980 elections and fourth term"
},
{
"paragraph_id": 45,
"text": "After several futile negotiations, in June 1984, Gandhi ordered the Indian army to enter the Golden Temple to remove Bhindranwale and his supporters from the complex. The army used heavy artillery, including tanks, in the action code-named Operation Blue Star. The operation badly damaged or destroyed parts of the Temple complex, including the Akal Takht shrine and the Sikh library. It also led to the deaths of many Sikh fighters and innocent pilgrims. The number of casualties remains disputed with estimates ranging from many hundreds to many thousands.",
"title": "1980 elections and fourth term"
},
{
"paragraph_id": 46,
"text": "Gandhi was accused of using the attack for political ends. Harjinder Singh Dilgeer stated that she attacked the temple complex to present herself as a great hero in order to win the general elections planned towards the end of 1984. There was fierce criticism of the action by Sikhs in India and overseas. There were also incidents of mutiny by Sikh soldiers in the aftermath of the attack.",
"title": "1980 elections and fourth term"
},
{
"paragraph_id": 47,
"text": "\"I am alive today, I may not be there tomorrow ... I shall continue to serve until my last breath and when I die, I can say, that every drop of my blood will invigorate India and strengthen it ... Even if I died in the service of the nation, I would be proud of it. Every drop of my blood ... will contribute to the growth of this nation and to make it strong and dynamic.\"",
"title": "Assassination"
},
{
"paragraph_id": 48,
"text": "—Gandhi's remarks on her last speech a day before her death (30 October 1984) at the then Parade Ground, Odisha.",
"title": "Assassination"
},
{
"paragraph_id": 49,
"text": "On 31 October 1984, two of Gandhi's Sikh bodyguards, Satwant Singh and Beant Singh, shot her with their service weapons in the garden of the prime minister's residence at 1 Safdarjung Road, New Delhi, allegedly in revenge for Operation Blue Star. The shooting occurred as she was walking past a wicket gate guarded by the two men. She was to be interviewed by the British filmmaker Peter Ustinov, who was filming a documentary for Irish television. Beant shot her three times using his side-arm; Satwant fired 30 rounds. The men dropped their weapons and surrendered. Afterwards, they were taken away by other guards into a closed room where Beant was shot dead. Kehar Singh was later arrested for conspiracy in the attack. Both Satwant and Kehar were sentenced to death and hanged in Delhi's Tihar Jail.",
"title": "Assassination"
},
{
"paragraph_id": 50,
"text": "Gandhi was taken to the All India Institutes of Medical Sciences at 9:30 AM where doctors operated on her. She was declared dead at 2:20 PM. The post-mortem examination was conducted by a team of doctors headed by Tirath Das Dogra. Dogra stated that Gandhi had sustained as many as 30 bullet wounds, from two sources: a Sten submachine gun and a .38 Special revolver. The assailants had fired 31 bullets at her, of which 30 hit her; 23 had passed through her body while seven remained inside her. Dogra extracted bullets to establish the make of the weapons used and to match each weapon with the bullets recovered by ballistic examination. The bullets were matched with their respective weapons at the Central Forensic Science Laboratory (CFSL) Delhi. Subsequently, Dogra appeared in Shri Mahesh Chandra's court as an expert witness (PW-5); his testimony took several sessions. The cross examination was conducted by Shri Pran Nath Lekhi, the defence counsel. Salma Sultan provided the first news of her assassination on Doordarshan's evening news on 31 October 1984, more than 10 hours after she was shot.",
"title": "Assassination"
},
{
"paragraph_id": 51,
"text": "Gandhi was cremated in accordance with Hindu tradition on 3 November near Raj Ghat. The site where she was cremated is known today as Shakti Sthal. In order to pay homage, Gandhi's body lay in state at Teen Murti House. Thousands of followers strained for a glimpse of the cremation. Her funeral was televised live on domestic and international stations, including the BBC. After her death, the Parade Ground was converted to the Indira Gandhi Park which was inaugurated by her son, Rajiv Gandhi.",
"title": "Assassination"
},
{
"paragraph_id": 52,
"text": "Gandhi's assassination dramatically changed the political landscape. Rajiv succeeded his mother as Prime Minister within hours of her murder and anti-Sikh riots erupted, lasting for several days and killing more than 3,000 Sikhs in New Delhi and an estimated 8,000 across India. Many Congress leaders were believed to be behind the anti-Sikh massacre.",
"title": "Assassination"
},
{
"paragraph_id": 53,
"text": "Gandhi's death was mourned worldwide. World leaders condemned the assassination and said her death would leave a 'big emptiness' in international affairs. In Moscow, Soviet President Konstantin Chernenko sent condolences stating, \"The Soviet people learned with pain and sorrow about the untimely death in a villainous assassination of the glorious daughter of the great Indian people, a fiery fighter for peace and security of peoples and a great friend of the Soviet Union\". President Ronald Reagan, along with Secretary of State George Shultz, visited the Indian Embassy to sign a book of condolences and expressed his 'shock, revulsion, and grief' over the assassination. 42nd vice president of the United States Walter Mondale called Gandhi 'a great leader of a great democracy' and deplored 'this shocking act of violence'. Asian, African and European leaders mourned Gandhi as a great champion of democracy and leader of the Non-Aligned Movement expressed its 'deepest grief' and called the killing a 'terrorist' act. South Korean President Chun Doo-hwan, said Gandhi's death meant the 'loss of a great leader to the whole world.' Yugoslav President Veselin Đuranović, Pakistani President Mohammad Zia ul-Haq, Italian President Sandro Pertini, Pope John Paul II at the Vatican, French President Francois Mitterrand condemned the killing. At the United Nations, the General Assembly paused in its work as shocked delegates mourned the death. Assembly President Paul Lusaka of Zambia postponed a scheduled debate and hastily organized a memorial meeting.",
"title": "Assassination"
},
{
"paragraph_id": 54,
"text": "Gandhi is remembered for her ability to effectively promote Indian foreign policy measures.",
"title": "Foreign relations"
},
{
"paragraph_id": 55,
"text": "In early 1971, disputed elections in Pakistan led then East Pakistan to declare independence as Bangladesh. Repression and violence by the Pakistani army led to 10 million refugees crossing the border into India over the following months. Finally, in December 1971, Gandhi intervened directly in the conflict to liberate Bangladesh. India emerged victorious following the war with Pakistan to become the dominant power of South Asia. India had signed a treaty with the Soviet Union promising mutual assistance in the case of war, while Pakistan received active support from the United States during the conflict. U.S. President Richard Nixon disliked Gandhi personally, referring to her as a \"bitch\" and a \"clever fox\" in his private communication with Secretary of State Henry Kissinger. Nixon later wrote of the war: \"[Gandhi] suckered [America]. Suckered us ... this woman suckered us.\" Relations with the U.S. became distant as Gandhi developed closer ties with the Soviet Union after the war. The latter grew to become India's largest trading partner and its biggest arms supplier for much of Gandhi's premiership. India's new hegemonic position, as articulated under the \"Indira Doctrine\", led to attempts to bring the Himalayan states under India's sphere of influence. Nepal and Bhutan remained aligned with India, while in 1975, after years of campaigning, Sikkim voted to join India in a referendum.",
"title": "Foreign relations"
},
{
"paragraph_id": 56,
"text": "India maintained close ties with neighbouring Bangladesh (formerly East Pakistan) following the Liberation War. Prime Minister Sheikh Mujibur Rahman recognised Gandhi's contributions to the independence of Bangladesh. However, Mujibur Rahman's pro-India policies antagonised many in Bangladeshi politics and the military, which feared that Bangladesh had become a client state of India. The Assassination of Mujibur Rahman in 1975 led to the establishment of Islamist military regimes that sought to distance the country from India. Gandhi's relationship with the military regimes was strained because of her alleged support of anti-Islamist leftist guerrilla forces in Bangladesh. Generally, however, there was a rapprochement between Gandhi and the Bangladeshi regimes, although issues such as border disputes and the Farakka Dam remained an irritant to bilateral ties. In 2011, the Government of Bangladesh conferred its highest state award for non-nationals, the Bangladesh Freedom Honour posthumously on Gandhi for her \"outstanding contribution\" to the country's independence.",
"title": "Foreign relations"
},
{
"paragraph_id": 57,
"text": "Gandhi's approach to dealing with Sri Lanka's ethnic problems was initially accommodating. She enjoyed cordial relations with Prime Minister Sirimavo Bandaranaike. In 1974, India ceded the tiny islet of Katchatheevu to Sri Lanka to save Bandaranaike's socialist government from a political disaster. However, relations soured over Sri Lanka's movement away from socialism under J. R. Jayewardene, whom Gandhi despised as a \"western puppet\". India under Gandhi was alleged to have supported the Liberation Tigers of Tamil Eelam (LTTE) militants in the 1980s to put pressure on Jayewardene to abide by Indian interests. Nevertheless, Gandhi rejected demands to invade Sri Lanka in the aftermath of Black July 1983, an anti-Tamil pogrom carried out by Sinhalese mobs. Gandhi made a statement emphasising that she stood for the territorial integrity of Sri Lanka, although she also stated that India cannot \"remain a silent spectator to any injustice done to the Tamil community.\"",
"title": "Foreign relations"
},
{
"paragraph_id": 58,
"text": "India's relationship with Pakistan remained strained following the Shimla Accord in 1972. Gandhi's authorisation of the detonation of a nuclear device at Pokhran in 1974 was viewed by Pakistani leader Zulfikar Ali Bhutto as an attempt to intimidate Pakistan into accepting India's hegemony in the subcontinent. However, in May 1976, Gandhi and Bhutto both agreed to reopen diplomatic establishments and normalise relations. Following the rise to power of General Muhammad Zia-ul-Haq in Pakistan in 1978, India's relations with its neighbour reached a nadir. Gandhi accused General Zia of supporting Khalistani militants in Punjab. Military hostilities recommenced in 1984 following Gandhi's authorisation of Operation Meghdoot. India was victorious in the resulting Siachen conflict against Pakistan.",
"title": "Foreign relations"
},
{
"paragraph_id": 59,
"text": "In order to keep the Soviet Union and the United States out of South Asia, Gandhi was instrumental in establishing the South Asian Association for Regional Cooperation (SAARC) in 1983",
"title": "Foreign relations"
},
{
"paragraph_id": 60,
"text": "Gandhi remained a staunch supporter of the Palestinians in the Arab–Israeli conflict and was critical of the Middle East diplomacy sponsored by the United States. Israel was viewed as a religious state, and thus an analogue to India's archrival Pakistan. Indian diplomats hoped to win Arab support in countering Pakistan in Kashmir. Nevertheless, Gandhi authorised the development of a secret channel of contact and security assistance with Israel in the late 1960s. Her lieutenant, P. V. Narasimha Rao, later became prime minister and approved full diplomatic ties with Israel in 1992.",
"title": "Foreign relations"
},
{
"paragraph_id": 61,
"text": "India's pro-Arab policy had mixed success. Establishment of close ties with the socialist and secular Baathist regimes to some extent neutralised Pakistani propaganda against India. However, the Indo-Pakistani War of 1971 presented a dilemma for the Arab and Muslim states of the Middle East as the war was fought by two states both friendly to the Arabs. The progressive Arab regimes in Egypt, Syria, and Algeria chose to remain neutral, while the conservative pro-American Arab monarchies in Jordan, Saudi Arabia, Kuwait, and United Arab Emirates openly supported Pakistan. Egypt's stance was met with dismay by the Indians, who had come to expect close co-operation with the Baathist regimes. But, the death of Nasser in 1970 and Sadat's growing friendship with Riyadh, and his mounting differences with Moscow, constrained Egypt to a policy of neutrality. Gandhi's overtures to Muammar Gaddafi were rebuffed. Libya agreed with the Arab monarchies in believing that Gandhi's intervention in East Pakistan was an attack against Islam.",
"title": "Foreign relations"
},
{
"paragraph_id": 62,
"text": "The 1971 war became a temporary stumbling block in growing Indo-Iranian ties. Although Iran had earlier characterized the Indo-Pakistani war in 1965 as Indian aggression, the Shah had launched an effort at rapprochement with India in 1969 as part of his effort to secure support for a larger Iranian role in the Persian Gulf. Gandhi's tilt towards Moscow and her dismemberment of Pakistan was perceived by the Shah as part of a larger anti-Iran conspiracy involving India, Iraq, and the Soviet Union. Nevertheless, Iran had resisted Pakistani pressure to activate the Baghdad Pact and draw the Central Treaty Organisation (CENTO) into the conflict. Gradually, Indian and Iranian disillusionment with their respective regional allies led to a renewed partnership between the nations. Gandhi was unhappy with the lack of support from India's Arab allies during the war with Pakistan, while the Shah was apprehensive at the growing friendship between Pakistan and Arab states of the Persian Gulf, especially Saudi Arabia, and the growing influence of Islam in Pakistani society. There was an increase in Indian economic and military co-operation with Iran during the 1970s. The 1974 India-Iranian agreement led to Iran supplying nearly 75 percent of India's crude oil demands. Gandhi appreciated the Shah's disregard of Pan-Islamism in diplomacy.",
"title": "Foreign relations"
},
{
"paragraph_id": 63,
"text": "One of the major developments in Southeast Asia during Gandhi's premiership was the formation of the Association of Southeast Asian Nations (ASEAN) in 1967. Relations between ASEAN and India were mutually antagonistic. India perceived ASEAN to be linked to the Southeast Asia Treaty Organization (SEATO) and, therefore, it was seen as a pro-American organisation. On their part, the ASEAN nations were unhappy with Gandhi's sympathy for the Viet Cong and India's strong links with the USSR. Furthermore, they were also apprehensions in the region about Gandhi's plans, particularly after India played a big role in breaking up Pakistan and facilitating the emergence of Bangladesh as a sovereign country in 1971. India's entry into the nuclear weapons club in 1974 also contributed to tensions in Southeast Asia. Relations only began to improve following Gandhi's endorsement of the ZOPFAN declaration and the disintegration of the SEATO alliance in the aftermath of Pakistani and American defeats in the region. Nevertheless, Gandhi's close relations with reunified Vietnam and her decision to recognize the Vietnam-installed Government of Cambodia in 1980 meant that India and ASEAN were unable to develop a viable partnership.",
"title": "Foreign relations"
},
{
"paragraph_id": 64,
"text": "On 26 September 1981, Gandhi was conferred with the honorary degree of Doctor at the Laucala Graduation at the University of the South Pacific in Fiji.",
"title": "Foreign relations"
},
{
"paragraph_id": 65,
"text": "Although independent India was initially viewed as a champion of various African independence movements, its cordial relationship with the Commonwealth of Nations and its liberal views of British policies in East Africa had harmed its image as a staunch supporter of various independence movements in the third world. Indian condemnation of militant struggles in Kenya and Algeria was in sharp contrast to China, who had supported armed struggle to win African independence. After reaching a high diplomatic point in the aftermath of Nehru's role in the Suez Crisis, India's isolation from Africa was almost complete when only four nations—Ethiopia, Kenya, Nigeria, and Libya—supported her during the Sino-Indian War in 1962. After Gandhi became prime minister, diplomatic and economic relations with the states which had sided with India during the Sino-Indian War were expanded. Gandhi began negotiations with the Kenyan government to establish the Africa-India Development Cooperation. The Indian government also started considering the possibility of bringing Indians settled in Africa within the framework of its policy goals to help recover its declining geo-strategic influence. Gandhi declared the people of Indian origin settled in Africa as \"Ambassadors of India\". Efforts to rope in the Asian community to join Indian diplomacy, however, came to naught, in part because of the unwillingness of Indians to remain in politically insecure surroundings, and because of the exodus of African Indians to Britain with the passing of the Commonwealth Immigrants Act in 1968. In Uganda, the African Indian community suffered persecution and eventually expulsion under the government of Idi Amin.",
"title": "Foreign relations"
},
{
"paragraph_id": 66,
"text": "Foreign and domestic policy successes in the 1970s enabled Gandhi to rebuild India's image in the eyes of African states. Victory over Pakistan and India's possession of nuclear weapons showed the degree of India's progress. Furthermore, the conclusion of the Indo-Soviet treaty in 1971, and threatening gestures by the United States, to send its nuclear-armed Task Force 74 into the Bay of Bengal at the height of the East Pakistan crisis had enabled India to regain its anti-imperialist image. Gandhi firmly tied Indian anti-imperialist interests in Africa to those of the Soviet Union. Unlike Nehru, she openly and enthusiastically supported liberation struggles in Africa. At the same time, Chinese influence in Africa had declined owing to its incessant quarrels with the Soviet Union. These developments permanently halted India's decline in Africa and helped to reestablish its geo-strategic presence.",
"title": "Foreign relations"
},
{
"paragraph_id": 67,
"text": "The Commonwealth is a voluntary association of mainly former British colonies. India maintained cordial relations with most of the members during Gandhi's time in power. In the 1980s, she, along with Canadian prime minister Pierre Trudeau, Zambia's president Kenneth Kaunda, Australian prime minister Malcolm Fraser and Singapore prime minister Lee Kuan Yew was regarded as one of the pillars of the Commonwealth. India under Gandhi also hosted the 1983 Commonwealth Heads of Government summit in New Delhi. Gandhi used these meetings as a forum to put pressure on member countries to cut economic, sports, and cultural ties with apartheid South Africa.",
"title": "Foreign relations"
},
{
"paragraph_id": 68,
"text": "In the early 1980s under Gandhi, India attempted to reassert its prominent role in the Non-Aligned Movement by focusing on the relationship between disarmament and economic development. By appealing to the economic grievances of developing countries, Gandhi and her successors exercised a moderating influence on the Non-aligned movement, diverting it from some of the Cold War issues that marred the controversial 1979 Havana meeting where Cuban leader Fidel Castro attempted to steer the movement towards the Soviet Union. Although hosting the 1983 summit at Delhi boosted Indian prestige within the movement, its close relations with the Soviet Union and its pro-Soviet positions on Afghanistan and Cambodia limited its influence.",
"title": "Foreign relations"
},
{
"paragraph_id": 69,
"text": "Gandhi spent a number of years in Europe during her youth and had formed many friendships there. During her premiership she formed friendships with many leaders such as West German chancellor, Willy Brandt and Austrian chancellor Bruno Kreisky. She also enjoyed a close working relationship with many British leaders including conservative premiers, Edward Heath and Margaret Thatcher.",
"title": "Foreign relations"
},
{
"paragraph_id": 70,
"text": "The relationship between India and the Soviet Union deepened during Gandhi's rule. The main reason was the perceived bias of the United States and China, rivals of the USSR, towards Pakistan. The support of the Soviets with arms supplies and the casting of a veto at the United Nations helped in winning and consolidating the victory over Pakistan in the 1971 Bangladesh liberation war. Before the war, Gandhi signed a treaty of friendship with the Soviets. They were unhappy with the 1974 nuclear test conducted by India but did not support further action because of the ensuing Cold War with the United States. Gandhi was unhappy with the Soviet invasion of Afghanistan, but once again calculations involving relations with Pakistan and China kept her from criticising the Soviet Union harshly. The Soviets became the main arms supplier during the Gandhi years by offering cheap credit and transactions in rupees rather than in dollars. The easy trade deals also applied to non-military goods. Under Gandhi, by the early 1980s, the Soviets had become India's largest trading partner.",
"title": "Foreign relations"
},
{
"paragraph_id": 71,
"text": "Soviet intelligence was involved in India during Indira Gandhi's administration, sometimes at Gandhi's expense. In the prelude to Operation Blue Star, by 1981, the Soviets had launched Operation Kontakt, which was based on a forged document purporting to contain details of the weapons and money provided by the ISI to Sikh militants who wanted to create an independent country. In November 1982, Yuri Andropov, the General Secretary of the Communist Party and leader of the Soviet Union, approved a proposal to fabricate Pakistani intelligence documents detailing ISI plans to foment religious disturbances in Punjab and promote the creation of Khalistan as an independent Sikh state. Indira Gandhi's decision to move troops into the Punjab was based on her taking seriously the information provided by the Soviets regarding secret CIA support for the Sikhs.",
"title": "Foreign relations"
},
{
"paragraph_id": 72,
"text": "According to the Mitrokhin Archive, the Soviets used a new recruit in the New Delhi residency named \"Agent S\" who was close to Indira Gandhi as a major channel for providing her disinformation. Agent S provided Indira Gandhi with false documents purporting to show Pakistani involvement in the Khalistan conspiracy. The KGB became confident that it could continue to deceive Indira Gandhi indefinitely with fabricated reports of CIA and Pakistani conspiracies against her. The Soviets persuaded Rajiv Gandhi during a visit to Moscow in 1983 that the CIA was engaged in subversion in the Punjab. When Rajiv Gandhi returned to India, he declared this to be true. The KGB was responsible for Indira Gandhi exaggerating the threats posed by both the CIA and Pakistan. This KGB role in facilitating Operation Bluestar was acknowledged by Subramanian Swamy who stated in 1992 \"The 1984 Operation Bluestar became necessary because of the vast disinformation against Sant Bhindranwale by the KGB, and repeated inside Parliament by the Congress Party of India.\"",
"title": "Foreign relations"
},
{
"paragraph_id": 73,
"text": "A report following the Mitrokhin archive also caused some historiographical controversy about Indira Gandhi. In India, a senior leader of the Bharatiya Janata Party, L. K. Advani, requested of the Government a white paper on the role of foreign intelligence agencies and a judicial enquiry on the allegations. The spokesperson of the Indian Congress party referred to the book as \"pure sensationalism not even remotely based on facts or records\" and pointed out that the book is not based on official records from the Soviet Union. L.K Advani raised his voice because in this book is written about ex-prime minister Indira Gandhi (Codenamed VANO) relations with KGB. KGB's direct link to Prime Minister of India, Indira Gandhi (code-named Vano) was alleged. \"Suitcases full of banknotes were said to be routinely taken to the Prime Minister's house. Former Syndicate member S. K. Patil is reported to have said that Mrs. Gandhi did not even return the suitcases\". An extensive footprint in the Indian media was also described- \"According to KGB files, by 1973 it had ten Indian newspapers on its payroll (which cannot be identified for legal reasons) as well as a press agency under its control. During 1972 the KGB claimed to have planted 3,789 articles in Indian newspapers – probably more than in any other country in the non-Communist world.\" According to its files, the number fell to 2,760 in 1973 but rose to 4,486 in 1974 and 5,510 in 1975. Mitrokhin estimated that in some major NATO countries, despite active-measures campaigns, the KGB was able to plant little more than 1 per cent of the articles which it placed in the Indian press.\"",
"title": "Foreign relations"
},
{
"paragraph_id": 74,
"text": "When Gandhi came to power in 1966, Lyndon Johnson was the US president. At the time, India was reliant on the US for food aid. Gandhi resented the US policy of food aid being used as a tool to force India to adopt policies favoured by the US. She also resolutely refused to sign the Treaty on the Non-Proliferation of Nuclear Weapons (NPT). Relations with the US were strained badly under President Richard Nixon and his favouring of Pakistan during the Bangladesh liberation war. Nixon despised Gandhi politically and personally. In 1981, Gandhi met President Ronald Reagan for the first time at the North–South Summit held to discuss global poverty. She had been described to him as an 'Ogre', but he found her charming and easy to work with and they formed a close working relationship during her premiership in the 1980s.",
"title": "Foreign relations"
},
{
"paragraph_id": 75,
"text": "Gandhi presided over three Five-Year Plans as prime minister, two of which succeeded in meeting their targeted growth.",
"title": "Economic policy"
},
{
"paragraph_id": 76,
"text": "There is considerable debate whether Gandhi was a socialist on principle or out of political expediency. Sunanda K. Datta-Ray described her as \"a master of rhetoric ... often more posture than policy\", while The Times journalist, Peter Hazelhurst, famously quipped that Gandhi's socialism was \"slightly left of self-interest.\" Critics have focused on the contradictions in the evolution of her stance towards communism. Gandhi was known for her anti-communist stance in the 1950s, with Meghnad Desai even describing her as \"the scourge of [India's] Communist Party.\" Yet, she later forged close relations with Indian communists even while using the army to break the Naxalites. In this context, Gandhi was accused of formulating populist policies to suit her political needs. She was seemingly against the rich and big business while preserving the status quo to manipulate the support of the left in times of political insecurity, such as the late 1960s. Although in time Gandhi came to be viewed as the scourge of the right-wing and reactionary political elements of India, leftist opposition to her policies emerged. As early as 1969, critics had begun accusing her of insincerity and machiavellianism. The Indian Libertarian wrote that: \"it would be difficult to find a more machiavellian leftist than Mrs Indira Gandhi ... for here is Machiavelli at its best in the person of a suave, charming and astute politician.\" J. Barkley Rosser Jr. wrote that \"some have even seen the declaration of emergency rule in 1975 as a move to suppress [leftist] dissent against Gandhi's policy shift to the right.\" In the 1980s, Gandhi was accused of \"betraying socialism\" after the beginning of Operation Forward, an attempt at economic reform. Nevertheless, others were more convinced of Gandhi's sincerity and devotion to socialism. Pankaj Vohra noted that \"even the late prime minister's critics would concede that the maximum number of legislations of social significance was brought about during her tenure ... [and that] she lives in the hearts of millions of Indians who shared her concern for the poor and weaker sections and who supported her politics.\"",
"title": "Economic policy"
},
{
"paragraph_id": 77,
"text": "In summarising the biographical works on Gandhi, Blema S. Steinberg concludes she was decidedly non-ideological. Only 7.4% (24) of the total 330 biographical extractions posit ideology as a reason for her policy choices. Steinberg notes Gandhi's association with socialism was superficial. She had only a general and traditional commitment to the ideology by way of her political and family ties. Gandhi personally had a fuzzy concept of socialism. In one of the early interviews she gave as prime minister, Gandhi had ruminated: \"I suppose you could call me a socialist, but you have understand what we mean by that term ... we used the word [socialism] because it came closest to what we wanted to do here – which is to eradicate poverty. You can call it socialism; but if by using that word we arouse controversy, I don't see why we should use it. I don't believe in words at all.\" Regardless of the debate over her ideology or lack thereof, Gandhi remains a left-wing icon. She has been described by Hindustan Times columnist, Pankaj Vohra, as \"arguably the greatest mass leader of the last century.\" Her campaign slogan, Garibi Hatao ('Remove Poverty'), has become an often used motto of the Indian National Congress Party. To the rural and urban poor, untouchables, minorities and women in India, Gandhi was \"Indira Amma or Mother Indira.\"",
"title": "Economic policy"
},
{
"paragraph_id": 78,
"text": "Gandhi inherited a weak and troubled economy. Fiscal problems associated with the war with Pakistan in 1965, along with a drought-induced food crisis that spawned famines, had plunged India into the sharpest recession since independence. The government responded by taking steps to liberalise the economy and agreeing to the devaluation of the currency in return for the restoration of foreign aid. The economy managed to recover in 1966 and ended up growing at 4.1% over 1966–1969. Much of that growth, however, was offset by the fact that the external aid promised by the United States government and the International Bank for Reconstruction and Development (IBRD), meant to ease the short-run costs of adjustment to a liberalised economy, never materialised. American policy makers had complained of continued restrictions imposed on the economy. At the same time, Indo-US relations were strained because of Gandhi's criticism of the American bombing campaign in Vietnam. While it was thought at the time, and for decades after, that President Johnson's policy of withholding food grain shipments was to coerce Indian support for the war, in fact, it was to offer India rainmaking technology that he wanted to use as a counterweight to China's possession of the atomic bomb. In light of the circumstances, liberalisation became politically suspect and was soon abandoned. Grain diplomacy and currency devaluation became matters of intense national pride in India. After the bitter experience with Johnson, Gandhi decided not to request food aid in the future. Moreover, her government resolved never again to become \"so vulnerably dependent\" on aid, and painstakingly began building up substantial foreign exchange reserves. When food stocks slumped after poor harvests in 1972, the government made it a point to use foreign exchange to buy US wheat commercially rather than seek resumption of food aid.",
"title": "Economic policy"
},
{
"paragraph_id": 79,
"text": "The period of 1967–75 was characterised by socialist ascendency in India, which culminated in 1976 with the official declaration of state socialism. Gandhi not only abandoned the short-lived liberalisation programme but also aggressively expanded the public sector with new licensing requirements and other restrictions for industry. She began a new course by launching the Fourth Five-Year Plan in 1969. The government targeted growth at 5.7% while stating as its goals, \"growth with stability and progressive achievement of self-reliance.\" The rationale behind the overall plan was Gandhi's Ten-Point Programme of 1967. This had been her first economic policy formulation, six months after coming to office. The programme emphasised greater state control of the economy with the understanding that government control assured greater welfare than private control. Related to this point were a set of policies that were meant to regulate the private sector. By the end of the 1960s, the reversal of the liberalisation process was complete, and India's policies were characterised as \"protectionist as ever.\"",
"title": "Economic policy"
},
{
"paragraph_id": 80,
"text": "To deal with India's food problems, Gandhi expanded the emphasis on production of inputs to agriculture that had already been initiated by her father, Jawaharlal Nehru. The Green Revolution in India subsequently culminated under her government in the 1970s. It transformed the country from a nation heavily reliant on imported grains, and prone to famine, to one largely able to feed itself, and becoming successful in achieving its goal of food security. Gandhi had a personal motive in pursuing agricultural self-sufficiency, having found India's dependency on the U.S. for shipments of grains humiliating.",
"title": "Economic policy"
},
{
"paragraph_id": 81,
"text": "The economic period of 1967–75 became significant for its major wave of nationalisation amidst increased regulation of the private sector.",
"title": "Economic policy"
},
{
"paragraph_id": 82,
"text": "Some other objectives of the economic plan for the period were to provide for the minimum needs of the community through a rural works program and the removal of the privy purses of the nobility. Both these, and many other goals of the 1967 programme, were accomplished by 1974–75. Nevertheless, the success of the overall economic plan was tempered by the fact that annual growth at 3.3–3.4% over 1969–74 fell short of the targeted figure.",
"title": "Economic policy"
},
{
"paragraph_id": 83,
"text": "The Fifth Five-Year Plan (1974–79) was enacted against the backdrop of the state of emergency and the Twenty Point Program of 1975. It was the economic rationale of the emergency, a political act that has often been justified on economic grounds. In contrast to the reception of Gandhi's earlier economic plan, this one was criticised for being a \"hastily thrown together wish list.\" Gandhi promised to reduce poverty by targeting the consumption levels of the poor and enact wide-ranging social and economic reforms. In addition, the government targeted an annual growth rate of 4.4% over the period of the plan.",
"title": "Economic policy"
},
{
"paragraph_id": 84,
"text": "The measures of the emergency regime was able to halt the economic trouble of the early to mid-1970s, which had been marred by harvest failures, fiscal contraction, and the breakdown of the Bretton Woods system of fixed exchanged rates. The resulting turbulence in the foreign exchange markets was accentuated further by the oil shock of 1973. The government was able to exceed the targeted growth figure with an annual growth rate of 5.0–5.2% over the five-year period of the plan (1974–79). The economy grew at the rate of 9% in 1975–76 alone, and the Fifth Plan, became the first plan during which the per capita income of the economy grew by over 5%.",
"title": "Economic policy"
},
{
"paragraph_id": 85,
"text": "Gandhi inherited a weak economy when she became prime minister again in 1980. The preceding year—1979–80—under the Janata Party government saw the strongest recession (−5.2%) in the history of modern India with inflation rampant at 18.2%. Gandhi proceeded to abrogate the Janata Party government's Five-Year Plan in 1980 and launched the Sixth Five-Year Plan (1980–85). Her government targeted an average growth rate of 5.2% over the period of the plan. Measures to check inflation were also taken; by the early 1980s it was under control at an annual rate of about 5%.",
"title": "Economic policy"
},
{
"paragraph_id": 86,
"text": "Although Gandhi continued professing socialist beliefs, the Sixth Five-Year Plan was markedly different from the years of Garibi Hatao. Populist programmes and policies were replaced by pragmatism. There was an emphasis on tightening public expenditures, greater efficiency of the state-owned enterprises (SOE), which Gandhi qualified as a \"sad thing\", and on stimulating the private sector through deregulation and liberation of the capital market. The government subsequently launched Operation Forward in 1982, the first cautious attempt at reform. The Sixth Plan went on to become the most successful of the Five-Year Plans yet; showing an average growth rate of 5.7% over 1980–85.",
"title": "Economic policy"
},
{
"paragraph_id": 87,
"text": "During Lal Bahadur Shastri's last full year in office (1965), inflation averaged 7.7%, compared to 5.2% at the end of Gandhi's first term in office (1977). On average, inflation in India had remained below 7% through the 1950s and 1960s. It then accelerated sharply in the 1970s, from 5.5% in 1970–71 to over 20% by 1973–74, due to the international oil crisis. Gandhi declared inflation the gravest of problems in 1974 (at 25.2%) and devised a severe anti-inflation program. The government was successful in bringing down inflation during the emergency; achieving negative figures of −1.1% by the end of 1975–76.",
"title": "Economic policy"
},
{
"paragraph_id": 88,
"text": "Gandhi inherited a tattered economy in her second term; harvest failures and a second oil shock in the late 1970s had caused inflation to rise again. During Charan Singh's short time in office in the second half of 1979, inflation averaged 18.2%, compared to 6.5% during Gandhi's last year in office (1984). General economic recovery under Gandhi led to an average inflation rate of 6.5% from 1981–82 to 1985–86—the lowest since the beginning of India's inflation problems in the 1960s.",
"title": "Economic policy"
},
{
"paragraph_id": 89,
"text": "The unemployment rate remained constant at 9% over a nine-year period (1971–80) before declining to 8.3% in 1983.",
"title": "Economic policy"
},
{
"paragraph_id": 90,
"text": "Despite the provisions, control and regulations of the Reserve Bank of India, most banks in India had continued to be owned and operated by private persons. Businessmen who owned the banks were often accused of channeling the deposits into their own companies and ignoring priority sector lending. Furthermore, there was a great resentment against class banking in India, which had left the poor (the majority of the population) unbanked. After becoming prime minister, Gandhi expressed her intention of nationalising the banks to alleviate poverty in a paper titled, \"Stray thoughts on Bank Nationalisation\". The paper received overwhelming public support. In 1969, Gandhi moved to nationalise fourteen major commercial banks. After this, public sector bank branch deposits increased by approximately 800 percent; advances took a huge jump by 11,000 percent. Nationalisation also resulted in significant growth in the geographic coverage of banks; the number of bank branches rose from 8,200 to over 62,000, most of which were opened in unbanked, rural areas. The nationalisation drive not only helped to increase household savings, but it also provided considerable investments in the informal sector, in small- and medium-sized enterprises, and in agriculture, and contributed significantly to regional development and to the expansion of India's industrial and agricultural base. Jayaprakash Narayan, who became famous for leading the opposition to Gandhi in the 1970s, solidly praised her nationalisation of banks.",
"title": "Domestic policy"
},
{
"paragraph_id": 91,
"text": "Having been re-elected in 1971 on a nationalisation platform, Gandhi proceeded to nationalise the coal, steel, copper, refining, cotton textiles, and insurance industries. Most of this was done to protect employment and the interests of organised labour. The remaining private sector industries were placed under strict regulatory control.",
"title": "Domestic policy"
},
{
"paragraph_id": 92,
"text": "During the Indo-Pakistani War of 1971, foreign-owned private oil companies had refused to supply fuel to the Indian Navy and the Indian Air Force. In response, Gandhi nationalised some oil companies in 1973. However, major nationalisations also occurred in 1974 and 1976, forming the oil majors. After nationalisation, the oil majors such as the Indian Oil Corporation (IOC), the Hindustan Petroleum Corporation (HPCL) and the Bharat Petroleum Corporation (BPCL) had to keep a minimum stock level of oil, to be supplied to the military when needed.",
"title": "Domestic policy"
},
{
"paragraph_id": 93,
"text": "In 1966, Gandhi accepted the demands of the Akalis to reorganise Punjab on linguistic lines. The Hindi-speaking southern half of Punjab became a separate state, Haryana, while the Pahari speaking hilly areas in the northeast were joined to Himachal Pradesh. By doing this she had hoped to ward off the growing political conflict between Hindu and Sikh groups in the region. However, a contentious issue that was considered unresolved by the Akalis was the status of Chandigarh, a prosperous city on the Punjab-Haryana border, which Gandhi declared a union territory to be shared as a capital by both the states.",
"title": "Domestic policy"
},
{
"paragraph_id": 94,
"text": "Victory over Pakistan in 1971 consolidated Indian power in Kashmir. Gandhi indicated that she would make no major concessions on Kashmir. The most prominent of the Kashmiri separatists, Sheikh Abdullah, had to recognise India's control over Kashmir in light of the new order in South Asia. The situation was normalised in the years following the war after Abdullah agreed to an accord with Gandhi, by giving up the demand for a plebiscite in return for a special autonomous status for Kashmir. In 1975, Gandhi declared the state of Jammu and Kashmir as a constituent unit of India. The Kashmir conflict remained largely peaceful if frozen under Gandhi's premiership.",
"title": "Domestic policy"
},
{
"paragraph_id": 95,
"text": "In 1972, Gandhi granted statehood to Meghalaya, Manipur and Tripura, while the North-East Frontier Agency was declared a union territory and renamed Arunachal Pradesh. The transition to statehood for these territories was successfully overseen by her administration. This was followed by the annexation of Sikkim in 1975.",
"title": "Domestic policy"
},
{
"paragraph_id": 96,
"text": "The principle of equal pay for equal work for both men and women was enshrined in the Indian Constitution under the Gandhi administration.",
"title": "Domestic policy"
},
{
"paragraph_id": 97,
"text": "Gandhi questioned the continued existence of a privy purse for former rulers of princely states. She argued the case for abolition based on equal rights for all citizens and the need to reduce the government's revenue deficit. The nobility responded by rallying around the Jana Sangh and other right-wing parties that stood in opposition to Gandhi's attempts to abolish royal privileges. The motion to abolish privy purses, and the official recognition of the titles, was originally brought before the Parliament in 1970. It was passed in the Lok Sabha but fell short of the two-thirds majority in the Rajya Sabha by a single vote. Gandhi responded by having a Presidential proclamation issued; de-recognising the princes; with this withdrawal of recognition, their claims to privy purses were also legally lost. However, the proclamation was struck down by the Supreme Court of India. In 1971, Gandhi again motioned to abolish the privy purse. This time, it was passed successfully as the 26th Amendment to the Constitution of India.",
"title": "Domestic policy"
},
{
"paragraph_id": 98,
"text": "Gandhi claimed that only \"clear vision, iron will and the strictest discipline\" can remove poverty. She justified the imposition of the state of emergency in 1975 in the name of the socialist mission of the Congress. Armed with the power to rule by decree and without constitutional constraints, Gandhi embarked on a massive redistribution program. The provisions included rapid enforcement of land ceilings, housing for landless labourers, the abolition of bonded labour and a moratorium on the debts of the poor. North India was at the centre of the reforms. Millions of hectares of land were acquired and redistributed. The government was also successful in procuring houses for landless labourers; According to Francine Frankel, three-fourths of the targeted four million houses was achieved in 1975 alone. Nevertheless, others have disputed the success of the program and criticised Gandhi for not doing enough to reform land ownership. The political economist, Jyotindra Das Gupta, cryptically questioned \"...whether or not the real supporters of land-holders were in jail or in power?\" Critics also accused Gandhi of choosing to \"talk left and act right\", referring to her concurrent pro-business decisions and endeavours. J. Barkley Rosser Jr. wrote that \"some have even seen the declaration of emergency rule in 1975 as a move to suppress dissent against Gandhi's policy shift to the right.\" Regardless of the controversy over the nature of the reforms, the long-term effects of the social changes gave rise to the prominence of middle-ranking farmers from intermediate and lower castes in North India. The rise of these newly empowered social classes challenged the political establishment of the Hindi Belt in the years to come.",
"title": "Domestic policy"
},
{
"paragraph_id": 99,
"text": "Under the 1950 Constitution of India, Hindi was to have become the official national language by 1965. This was unacceptable to many non-Hindi-speaking states, which wanted the continued use of English in government. In 1967, Gandhi introduced a constitutional amendment that guaranteed the de facto use of both Hindi and English as official languages. This established the official government policy of bilingualism in India and satisfied the non-Hindi-speaking Indian states. Gandhi thus put herself forward as a leader with a pan-Indian vision. Nevertheless, critics alleged that her stance was actually meant to weaken the position of rival Congress leaders from the northern states such as Uttar Pradesh, where there had been strong, sometimes violent, pro-Hindi agitations. Gandhi came out of the language conflicts with the strong support of the south Indian populace.",
"title": "Domestic policy"
},
{
"paragraph_id": 100,
"text": "In the late 1960s and 1970s, Gandhi had the Indian army crush militant Communist uprisings in the Indian state of West Bengal. The communist insurgency in India was completely suppressed during the state of emergency.",
"title": "Domestic policy"
},
{
"paragraph_id": 101,
"text": "Gandhi considered the north-eastern region important, because of its strategic situation. In 1966, the Mizo uprising took place against the government of India and overran almost the whole of the Mizoram region. Gandhi ordered the Indian Army to launch massive retaliatory strikes in response. The rebellion was suppressed with the Indian Air Force carrying out airstrikes in Aizawl; this remains the only instance of India carrying out an airstrike in its own territory. The defeat of Pakistan in 1971 and the secession of East Pakistan as pro-India Bangladesh led to the collapse of the Mizo separatist movement. In 1972, after the less extremist Mizo leaders came to the negotiating table, Gandhi upgraded Mizoram to the status of a union territory. A small-scale insurgency by some militants continued into the late 1970s, but it was successfully dealt with by the government. The Mizo conflict was resolved definitively during the administration of Gandhi's son Rajiv. Today, Mizoram is considered one of the most peaceful states in the north-east.",
"title": "Domestic policy"
},
{
"paragraph_id": 102,
"text": "Responding to the insurgency in Nagaland, Gandhi \"unleashed a powerful military offensive\" in the 1970s. Finally, a massive crackdown on the insurgents took place during the state of emergency ordered by Gandhi. The insurgents soon agreed to surrender and signed the Shillong Accord in 1975. While the agreement was considered a victory for the Indian government and ended large-scale conflicts, there have since been spurts of violence by rebel holdouts and ethnic conflict amongst the tribes.",
"title": "Domestic policy"
},
{
"paragraph_id": 103,
"text": "Gandhi contributed to, and carried out further, the vision of Jawaharlal Nehru, former premier of India, to develop its nuclear program. Gandhi authorised the development of nuclear weapons in 1967, in response to Test No. 6 by the People's Republic of China. Gandhi saw this test as Chinese nuclear intimidation and promoted Nehru's views to establish India's stability and security interests independent from those of the nuclear superpowers.",
"title": "Domestic policy"
},
{
"paragraph_id": 104,
"text": "The programme became fully mature in 1974, when Dr. Raja Ramanna reported to Gandhi that India had the ability to test its first nuclear weapon. Gandhi gave verbal authorisation for this test, and preparations were made in the Indian Army's Pokhran Test Range. In 1974, India successfully conducted an underground nuclear test, unofficially code named \"Smiling Buddha\", near the desert village of Pokhran in Rajasthan. As the world was quiet about this test, a vehement protest came from Pakistan as its prime minister, Zulfikar Ali Bhutto, described the test as \"Indian hegemony\" to intimidate Pakistan. In response to this, Bhutto launched a massive campaign to make Pakistan a nuclear power. Bhutto asked the nation to unite and slogans such as \"hum ghaas aur pattay kha lay gay magar nuclear power ban k rhe gay\" (\"We will eat grass or leaves or even go hungry, but we will get nuclear power\") were employed. Gandhi directed a letter to Bhutto, and later to the world, claiming the test was for peaceful purposes and part of India's commitment to develop its programme for industrial and scientific use.",
"title": "Domestic policy"
},
{
"paragraph_id": 105,
"text": "In spite of intense international criticism and steady decline in foreign investment and trade, the nuclear test was popular domestically. The test caused an immediate revival of Gandhi's popularity, which had flagged considerably from its heights after the 1971 war. The overall popularity and image of the Congress Party was enhanced and the Congress Party was well received in the Indian Parliament.",
"title": "Domestic policy"
},
{
"paragraph_id": 106,
"text": "She married Feroze Gandhi at the age of 25, in 1942. Their marriage lasted 18 years until he died of a heart attack in 1960. They had two sons—Rajiv and Sanjay. Initially, her younger son Sanjay had been her chosen heir, but after his death in a flying accident in June 1980, Gandhi persuaded her reluctant elder son Rajiv to quit his job as a pilot and enter politics in February 1981. Rajiv took office as prime minister following his mother's assassination in 1984; he served until December 1989. Rajiv Gandhi himself was assassinated by a suicide bomber working on behalf of LTTE on 21 May 1991.",
"title": "Personal life"
},
{
"paragraph_id": 107,
"text": "In 1952 in a letter to her American friend Dorothy Norman, Gandhi wrote: \"I am in no sense a feminist, but I believe in women being able to do everything ... Given the opportunity to develop, capable Indian women have come to the top at once.\" While this statement appears paradoxical, it reflects Gandhi's complex feelings toward her gender and feminism. Her egalitarian upbringing with her cousins helped contribute to her sense of natural equality. \"Flying kites, climbing trees, playing marbles with her boy cousins, Indira said she hardly knew the difference between a boy and a girl until the age of twelve.\"",
"title": "Views on women"
},
{
"paragraph_id": 108,
"text": "Gandhi did not often discuss her gender, but she did involve herself in women's issues before becoming the prime minister. Before her election as prime minister, she became active in the organisational wing of the Congress party, working in part in the Women's Department. In 1956, Gandhi had an active role in setting up the Congress Party's Women's Section. Unsurprisingly, a lot of her involvement stemmed from her father. As an only child, Gandhi naturally stepped into the political light. And, as a woman, she naturally helped head the Women's section of the Congress Party. She often tried to organise women to involve themselves in politics. Although rhetorically Gandhi may have attempted to separate her political success from her gender, Gandhi did involve herself in women's organizations. The political parties in India paid substantial attention to Gandhi's gender before she became prime minister, hoping to use her for political gain. Even though men surrounded Gandhi during her upbringing, she still had a female role model as a child. Several books on Gandhi mention her interest in Joan of Arc. In her own accounts through her letters, she wrote to her friend Dorothy Norman, in 1952 she wrote: \"At about eight or nine I was taken to France; Jeanne d'Arc became a great heroine of mine. She was one of the first people I read about with enthusiasm.\" Another historian recounts Indira's comparison of herself to Joan of Arc: \"Indira developed a fascination for Joan of Arc, telling her aunt, 'Someday I am going to lead my people to freedom just as Joan of Arc did'!\" Gandhi's linking of herself to Joan of Arc presents a model for historians to assess Gandhi. As one writer said: \"The Indian people were her children; members of her family were the only people capable of leading them.\"",
"title": "Views on women"
},
{
"paragraph_id": 109,
"text": "Gandhi had been swept up in the call for Indian independence since she was born in 1917. Thus by 1947, she was already well immersed in politics, and by 1966, when she first assumed the position of prime minister, she had held several cabinet positions in her father's office.",
"title": "Views on women"
},
{
"paragraph_id": 110,
"text": "Gandhi's advocacy for women's rights began with her help in establishing the Congress Party's Women's Section. In 1956, she wrote in a letter: \"It is because of this that I am taking a much more active part in politics. I have to do a great deal of touring in order to set up the Congress Party Women's Section, and am on numerous important committees.\" Gandhi spent a great deal of time throughout the 1950s helping to organise women. She wrote to Norman in 1959, irritable that women had organised around the communist cause but had not mobilised for the Indian cause: \"The women, whom I have been trying to organize for years, had always refused to come into politics. Now they are out in the field.\" Once appointed president in 1959, she \"travelled relentlessly, visiting remote parts of the country that had never before received a VIP ... she talked to women, asked about child health and welfare, inquired after the crafts of the region\" Gandhi's actions throughout her ascent to power clearly reflect a desire to mobilise women. Gandhi did not see the purpose of feminism. She saw her own success as a woman, and also noted that: \"Given the opportunity to develop, capable Indian women have come to the top at once.\"",
"title": "Views on women"
},
{
"paragraph_id": 111,
"text": "Gandhi felt guilty about her inability to fully devote her time to her children. She noted that her main problem in office was how to balance her political duties with tending to her children, and \"stressed that motherhood was the most important part of her life.\" At another point, she went into more detail: \"To a woman, motherhood is the highest fulfilment ... To bring a new being into this world, to see its perfection and to dream of its future greatness is the most moving of all experiences and fills one with wonder and exaltation.\"",
"title": "Views on women"
},
{
"paragraph_id": 112,
"text": "Her domestic initiatives did not necessarily reflect favourably on Indian women. Gandhi did not make a special effort to appoint women to cabinet positions. She did not appoint any women to full cabinet rank during her terms in office. Yet despite this, many women saw Gandhi as a symbol for feminism and an image of women's power.",
"title": "Views on women"
},
{
"paragraph_id": 113,
"text": "American veteran politican Henry A. Kissinger had described Indira Gandhi as an \"Iron lady\". After leading India to victory against Pakistan in the Bangladesh Liberation War in 1971, President V. V. Giri awarded Gandhi with India's highest civilian honour, the Bharat Ratna.",
"title": "Legacy"
},
{
"paragraph_id": 114,
"text": "In 2011, the Bangladesh Freedom Honour, Bangladesh's highest civilian award for foreign nationals, was posthumously conferred on Gandhi for her \"outstanding contributions\" to Bangladesh's Liberation War.",
"title": "Legacy"
},
{
"paragraph_id": 115,
"text": "Gandhi's main legacy was standing firm in the face of American pressure to defeat Pakistan and turn East Pakistan into independent Bangladesh. She was also responsible for India joining the group of countries with nuclear weapons. Although India being officially part of the Non-Aligned Movement, she gave Indian foreign policy a tilt towards the Soviet bloc.",
"title": "Legacy"
},
{
"paragraph_id": 116,
"text": "In 1999, Gandhi was named \"Woman of the Millennium\" in an online poll organised by the BBC. In 2012, she was ranked number seven on Outlook India's poll of the Greatest Indian.",
"title": "Legacy"
},
{
"paragraph_id": 117,
"text": "Being at the forefront of Indian politics for decades, Gandhi left a powerful legacy on Indian politics. Similarly, some of her actions have also caused controversies. One of the criticism concerns her rule to have damaged internal party democracy in the Congress party. Her detractors accuse her of weakening State chief ministers and thereby weakening the federal structure, weakening the independence of the judiciary, and weakening her cabinet by vesting power in her secretariat and her sons. Gandhi is also associated with fostering a culture of nepotism in Indian politics and in India's institutions. She is also almost singularly associated with the period of Emergency rule, described by some as a \"dark period\" in Indian democracy. The Forty-second Amendment of the Constitution of India which was adopted during the emergency can also be regarded as part of her legacy. Although judicial challenges and non-Congress governments tried to water down the amendment, the amendment still stands.",
"title": "Legacy"
},
{
"paragraph_id": 118,
"text": "She remains the only woman to occupy the office of the prime minister of India. In 2020, Gandhi was named by Time magazine among the world's 100 powerful women who defined the last century. Shakti Sthal whose name literally translates to place of strength is a monument to her.",
"title": "Legacy"
},
{
"paragraph_id": 119,
"text": "While portrayals of Indira Gandhi by actors in Indian cinema have generally been avoided, with filmmakers using back-shots, silhouettes and voiceovers to give impressions of her character, several films surrounding her tenure, policies or assassination have been made.",
"title": "In popular culture"
},
{
"paragraph_id": 120,
"text": "These include Aandhi (1975) by Gulzar, Kissa Kursi Ka (1975) by Amrit Nahata, Nasbandi (1978) by I. S. Johar, Maachis (1996) by Gulzar, Hazaaron Khwaishein Aisi (2003) by Sudhir Mishra, Hawayein (2003) by Ammtoje Mann, Des Hoyaa Pardes (2004) by Manoj Punj, Kaya Taran (2004) by Sashi Kumar, Amu (2005) by Shonali Bose, Kaum De Heere (2014) by Ravinder Ravi, 47 to 84 (2014) by Rajiv Sharma, Punjab 1984 (2014) by Anurag Singh, The Fourth Direction (2015) by Gurvinder Singh, Dharam Yudh Morcha (2016) by Naresh S. Garg, 31 October (2016) by Shivaji Lotan Patil, Baadshaho (2017) by Milan Luthria, Toofan Singh (2017) by Baghal Singh, Sonchiriya (2019) by Abhishek Chaubey, Shukranu (2020) by Bishnu Dev Halder. Aandhi, Kissa Kursi Ka and Nasbandi are notable for having been released during Gandhi's lifetime and were subject to censorship on exhibition during the Emergency.",
"title": "In popular culture"
},
{
"paragraph_id": 121,
"text": "Indus Valley to Indira Gandhi is a 1970 Indian two-part documentary film by S. Krishnaswamy which traces the history of India from the earliest times of the Indus Valley Civilization to the prime ministership of Indira Gandhi. The Films Division of India produced Our Indira, a 1973 short documentary film directed by S.N.S. Sastry showing the beginning of her first tenure as PM and her speeches from the Stockholm Conference.",
"title": "In popular culture"
},
{
"paragraph_id": 122,
"text": "Pradhanmantri (lit. 'Prime Minister'), a 2013 Indian documentary television series which aired on ABP News and covers the various policies and political tenures of Indian PMs, includes the tenureship of Gandhi in the episodes \"Indira Gandhi Becomes PM\", \"Split in Congress Party\", \"Story before Indo-Pakistani War of 1971\", \"Indo-Pakistani War of 1971 and Birth of Bangladesh\", \"1975–77 State of Emergency in India\", and \"Indira Gandhi back as PM and Operation Blue Star\" with Navni Parihar portraying the role of Gandhi. Parihar also portrays Gandhi in the 2021 Indian film Bhuj: The Pride of India which is based on the 1971 Indo-Pakistani War.",
"title": "In popular culture"
},
{
"paragraph_id": 123,
"text": "The taboo surrounding the depiction of Indira Gandhi in Indian cinema has begun to dissipate in recent years with actors portraying her in films. Notable portrayals include: Sarita Choudhury in Midnight's Children (2012); Mandeep Kohli in Jai Jawaan Jai Kisaan (2015); Supriya Vinod in Indu Sarkar (2017), NTR: Kathanayakudu/NTR: Mahanayakudu (2019) and Yashwantrao Chavan – Bakhar Eka Vaadalaachi (2014); Flora Jacob in Raid (2018), Thalaivi (2021) and Radhe Shyam (2022), Kishori Shahane in PM Narendra Modi (2019), Avantika Akerkar in Thackeray (2019) and 83 (2021), Supriya Karnik in Main Mulayam Singh Yadav (2021), Lara Dutta in Bell Bottom(2021) and Fatima Sana Shaikh in Sam Bahadur (film)",
"title": "In popular culture"
},
{
"paragraph_id": 124,
"text": "Book written by Indira Gandhi",
"title": "Bibliography"
},
{
"paragraph_id": 125,
"text": "Books on Indira Gandhi",
"title": "Bibliography"
}
]
| Indira Priyadarshini Gandhi was an Indian politician and stateswoman who served as the 3rd Prime Minister of India from 1966 to 1977 and again from 1980 until her assassination in 1984. She was India's first and, to date, only female prime minister, and a central figure in Indian politics as the leader of the Indian National Congress. Gandhi was the daughter of Jawaharlal Nehru, the first prime minister of India, and the mother of Rajiv Gandhi, who succeeded her in office as the country's sixth prime minister. Furthermore, Gandhi's cumulative tenure of 15 years and 350 days makes her the second-longest-serving Indian prime minister after her father. Henry Kissinger described her as an "Iron Lady", a nickname that became associated with her tough personality since her lifetime. During Nehru's premiership from 1947 to 1964, Gandhi served as his hostess and accompanied him on his numerous foreign trips. In 1959, she played a part in the dissolution of the communist-led Kerala state government as then-president of the Indian National Congress, otherwise a ceremonial position to which she was elected earlier that year. Lal Bahadur Shastri, who had succeeded Nehru as prime minister upon his death in 1964, appointed her minister of information and broadcasting in his government; the same year she was elected to the Rajya Sabha, the upper house of the Indian Parliament. On Shastri's sudden death in January 1966, Gandhi defeated her rival, Morarji Desai, in the Congress Party's parliamentary leadership election to become leader and also succeeded Shastri as prime minister. She led the Congress to victory in two subsequent elections, starting with the 1967 general election, in which she was first elected to the lower house of the Indian parliament, the Lok Sabha. In 1971, the Congress Party headed by Gandhi managed to secure its first landslide victory since her father's sweep in 1962, focusing on issues such as poverty. But following the nationwide Emergency implemented by her, she faced massive anti-incumbency and lost the 1977 general election, the first time for the Congress party to do so. Gandhi was ousted from office and even lost her seat in parliament in the election. Nevertheless, her faction of the Congress Party won the next general election by a landslide, due to Gandhi's leadership and weak governance of the Janata Party rule, the first non-Congress government in independent modern India's history. As prime minister, Gandhi was known for her political intransigence and unprecedented centralization of power. In 1967, she headed a military conflict with China in which India successfully repelled Chinese incursions in the Himalayas. In 1971, she went to war with Pakistan in support of the independence movement and war of independence in East Pakistan, which resulted in an Indian victory and the creation of Bangladesh, as well as increasing India's influence to the point where it became the sole regional power in South Asia. Gandhi's rule saw India grow closer to the Soviet Union by signing a friendship treaty in 1971, with India receiving military, financial, and diplomatic support from the Soviet Union during its conflict with Pakistan in the same year. Despite India being at the forefront of the non-aligned movement, Gandhi led India to become one of the Soviet Union's closest allies in Asia, with India and the Soviet Union often supporting each other in proxy wars and at the United Nations. 
Citing separatist tendencies and in response to a call for revolution, Gandhi instituted a state of emergency from 1975 to 1977, during which basic civil liberties were suspended and the press was censored. Widespread atrocities were carried out during that period. Gandhi faced the growing Sikh separatism throughout her third premiership; in response, she ordered Operation Blue Star, which involved military action in the Golden Temple and resulted in bloodshed with hundreds of Sikhs killed. On 31 October 1984, Gandhi was assassinated by her bodyguards, both of whom were Sikh nationalists seeking retribution for the events at the temple. Indira Gandhi is remembered as the most powerful woman in the world during her tenure. Her supporters cite her leadership during victories over geopolitical rivals China and Pakistan, the Green Revolution, a growing economy in the early 1980s, and her anti-poverty campaign that led her to be known as "Mother Indira" among the country's poor and rural classes. However, critics note her authoritarian rule of India during the Emergency. In 1999, Gandhi was named "Woman of the Millennium" in an online poll organized by the BBC. In 2020, Gandhi was named by Time magazine among the 100 women who defined the past century as counterparts to the magazine's previous choices for Man of the Year. | 2001-10-23T04:44:51Z | 2023-12-30T15:13:08Z | [
"Template:Cite journal",
"Template:OL author",
"Template:Ministry of Communications (India)",
"Template:Pp-protected",
"Template:Sfn",
"Template:Blockquote",
"Template:Notelist",
"Template:Cite news",
"Template:Authority control",
"Template:Main",
"Template:Center",
"Template:Quote box",
"Template:Navboxes",
"Template:External Affairs Ministers of India",
"Template:Prime Ministers of India",
"Template:Ministry of Commerce and Industry (India)",
"Template:Bharat Ratna",
"Template:Short description",
"Template:IPA-hi",
"Template:ISBN",
"Template:Reflist",
"Template:Citation",
"Template:Refend",
"Template:Energy Ministries and Departments of India",
"Template:Use dmy dates",
"Template:See also",
"Template:Dead link",
"Template:Cbignore",
"Template:Cite AV media",
"Template:Use Indian English",
"Template:Literal translation",
"Template:Portal",
"Template:Webarchive",
"Template:Ministry of Finance (India)",
"Template:Cite web",
"Template:Cite book",
"Template:Cite magazine",
"Template:Curlie",
"Template:Defence Ministers of India",
"Template:Indian National Congress Presidents",
"Template:Infobox officeholder",
"Template:Indira Gandhi series",
"Template:Further",
"Template:Citation needed",
"Template:Flag",
"Template:Refbegin",
"Template:Sister project links",
"Template:IMDb name",
"Template:Home Ministry (India)",
"Template:Ministers of Information and Broadcasting"
]
| https://en.wikipedia.org/wiki/Indira_Gandhi |
15,180 | Intergovernmentalism | In international relations, intergovernmentalism treats states (and national governments in particular) as the primary actors in the integration process. Intergovernmentalist approaches claim to be able to explain both periods of radical change in the European Union because of converging governmental preferences and periods of inertia because of diverging national interests. Intergovernmentalism is distinguishable from realism and neorealism because it recognized the significance of institutionalisation in international politics and the impact of domestic politics upon governmental preferences.
The best-known example of regional integration is the European Union (EU), an economic and political intergovernmental organisation of 27 member states, all in Europe. The EU operates through a system of supranational independent institutions and intergovernmental negotiated decisions by the member states. Institutions of the EU include the European Commission, the Council of the European Union, the European Council, the Court of Justice of the European Union, the European Central Bank, the Court of Auditors, and the European Parliament. The European Parliament is elected every five years by EU citizens.
The EU has developed a single market through a standardised system of laws that apply in all member states. Within the Schengen Area (which includes 22 EU and 4 non-EU European states) passport controls have been abolished. EU policies favour the free movement of people, goods, services, and capital within its boundaries, enact legislation in justice and home affairs, and maintain common policies on trade, agriculture, fisheries and regional development.
A monetary union, the eurozone, was established in 1999 and is composed of 17 member states. Through the Common Foreign and Security Policy the EU has developed a role in external relations and defence. Permanent diplomatic missions have been established around the world. The EU is represented at the United Nations, the World Trade Organization, the G8 and the G-20.
Intergovernmentalism represents a way of limiting the conferral of powers upon supranational institutions, halting the emergence of common policies. In the current institutional system of the EU, the European Council and the Council play the role of the institutions that have the last word on decisions and policies of the EU, institutionalizing a de facto intergovernmental control over the EU as a whole, with the possibility of giving more power to a small group of states. In the extreme, this can create a condition of supremacy of some member states over others, violating the principle of a "Union of Equals".
The African Union (AU, or, in its other official languages, UA) is a continental intergovernmental union, similar to but less integrated than the EU, consisting of 54 African states. The AU was presented on 26 May 2001 in Addis Ababa, Ethiopia and officially founded on 9 July 2002 in Durban, South Africa to replace the Organisation of African Unity (OAU). The most important decisions of the AU are made by the Assembly of the African Union, a semi-annual meeting of the heads of state and government of its member states. The AU's secretariat, the African Union Commission, is based in Addis Ababa, Ethiopia. | [
{
"paragraph_id": 0,
"text": "In international relations, intergovernmentalism treats states (and national governments in particular) as the primary actors in the integration process. Intergovernmentalist approaches claim to be able to explain both periods of radical change in the European Union because of converging governmental preferences and periods of inertia because of diverging national interests. Intergovernmentalism is distinguishable from realism and neorealism because it recognized the significance of institutionalisation in international politics and the impact of domestic politics upon governmental preferences.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The best-known example of regional integration is the European Union (EU), an economic and political intergovernmental organisation of 27 member states, all in Europe. The EU operates through a system of supranational independent institutions and intergovernmental negotiated decisions by the member states. Institutions of the EU include the European Commission, the Council of the European Union, the European Council, the Court of Justice of the European Union, the European Central Bank, the Court of Auditors, and the European Parliament. The European Parliament is elected every five years by EU citizens.",
"title": "Regional integration"
},
{
"paragraph_id": 2,
"text": "The EU has developed a single market through a standardised system of laws that apply in all member states. Within the Schengen Area (which includes 22 EU and 4 non-EU European states) passport controls have been abolished. EU policies favour the free movement of people, goods, services, and capital within its boundaries, enact legislation in justice and home affairs, and maintain common policies on trade, agriculture, fisheries and regional development.",
"title": "Regional integration"
},
{
"paragraph_id": 3,
"text": "A monetary union, the eurozone, was established in 1999 and is composed of 17 member states. Through the Common Foreign and Security Policy the EU has developed a role in external relations and defence. Permanent diplomatic missions have been established around the world. The EU is represented at the United Nations, the World Trade Organization, the G8 and the G-20.",
"title": "Regional integration"
},
{
"paragraph_id": 4,
"text": "Intergovernmentalism represents a way for limiting the conferral of powers upon supranational institutions, halting the emergence of common policies. In the current institutional system of the EU, the European Council and the Council play the role of the institutions which have the last word about decisions and policies of the EU, institutionalizing a de facto intergovernmental control over the EU as a whole, with the possibility to give more power to a small group of states. This extreme consequence can create the condition of supremacy of someone over someone else violating the principle of a \"Union of Equals\".",
"title": "Regional integration"
},
{
"paragraph_id": 5,
"text": "The African Union (AU, or, in its other official languages, UA) is a continental intergovernmental union, similar but less integrated to the EU, consisting of 54 African states. The AU was presented on 26 May 2001 in Addis Ababa, Ethiopia and officially founded on 9 July 2002 in Durban, South Africa to replace the Organisation of African Unity (OAU). The most important decisions of the AU are made by the Assembly of the African Union, a semi-annual meeting of the heads of state and government of its member states. The AU's secretariat, the African Union Commission, is based in Addis Ababa, Ethiopia.",
"title": "Regional integration"
}
]
| In international relations, intergovernmentalism treats states as the primary actors in the integration process. Intergovernmentalist approaches claim to be able to explain both periods of radical change in the European Union because of converging governmental preferences and periods of inertia because of diverging national interests. Intergovernmentalism is distinguishable from realism and neorealism because it recognized the significance of institutionalisation in international politics and the impact of domestic politics upon governmental preferences. | 2001-10-23T23:07:56Z | 2023-08-24T01:22:07Z | [
"Template:Reflist",
"Template:Cite encyclopedia",
"Template:Cite book",
"Template:International relations theory",
"Template:Short description",
"Template:Use dmy dates",
"Template:International relations theory sidebar",
"Template:See also",
"Template:Cite web",
"Template:Cite dictionary"
]
| https://en.wikipedia.org/wiki/Intergovernmentalism |
15,181 | Individualism | Individualism is the moral stance, political philosophy, ideology and social outlook that emphasizes the intrinsic worth of the individual. Individualists promote realizing one's goals and desires, valuing independence and self-reliance, and advocating that the interests of the individual should gain precedence over the state or a social group, while opposing external interference upon one's own interests by society or institutions such as the government. Individualism makes the individual its focus, and so starts "with the fundamental premise that the human individual is of primary importance in the struggle for liberation".
Individualism is often defined in contrast to totalitarianism, collectivism and more corporate social forms.
Individualism has been used as a term denoting "[t]he quality of being an individual; individuality", related to possessing "[a]n individual characteristic; a quirk". Individualism is also associated with artistic and bohemian interests and lifestyles, where there is a tendency towards self-creation and experimentation as opposed to tradition or popular mass opinions and behaviors. It is also associated with humanist philosophical positions and ethics.
In the English language, the word individualism was first introduced as a pejorative by utopian socialists such as the Owenites in the late 1830s, although it is unclear if they were influenced by Saint-Simonianism or came up with it independently. A more positive use of the term in Britain came to be used with the writings of James Elishama Smith, who was a millenarian and a Christian Israelite. Although an early follower of Robert Owen, he eventually rejected Owen's collective idea of property and found in individualism a "universalism" that allowed for the development of the "original genius". Without individualism, Smith argued that individuals cannot amass property to increase one's happiness. William Maccall, another Unitarian preacher and probably an acquaintance of Smith, came somewhat later, although influenced by John Stuart Mill, Thomas Carlyle and German Romanticism, to the same positive conclusions in his 1847 work Elements of Individualism.
An individual is a person or any specific object in a collection. In the 15th century and earlier, and also today within the fields of statistics and metaphysics, individual means "indivisible", typically describing any numerically singular thing, but sometimes meaning "a person" as in "the problem of proper names". From the 17th century on, individual indicates separateness, as in individualism. Individuality is the state or quality of being an individuated being; a person separated from everything with unique character by possessing their own needs, goals, and desires in comparison to other persons.
The principle of individuation, or principium individuationis, describes the manner in which a thing is identified as distinguished from other things. For Carl Jung, individuation is a process of transformation, whereby the personal and collective unconscious is brought into consciousness (by means of dreams, active imagination or free association to take examples) to be assimilated into the whole personality. It is a completely natural process necessary for the integration of the psyche to take place. Jung considered individuation to be the central process of human development. In L'individuation psychique et collective, Gilbert Simondon developed a theory of individual and collective individuation in which the individual subject is considered as an effect of individuation rather than a cause. Thus, the individual atom is replaced by a never-ending ontological process of individuation. Individuation is an always incomplete process, always leaving a "pre-individual" left-over, itself making possible future individuations. The philosophy of Bernard Stiegler draws upon and modifies the work of Gilbert Simondon on individuation and also upon similar ideas in Friedrich Nietzsche and Sigmund Freud. For Stiegler, "the I, as a psychic individual, can only be thought in relationship to we, which is a collective individual. The I is constituted in adopting a collective tradition, which it inherits and in which a plurality of I's acknowledge each other's existence."
Individualism holds that a person taking part in society attempts to learn and discover what his or her own interests are on a personal basis, without a presumed following of the interests of a societal structure (an individualist need not be an egoist). The individualist does not necessarily follow one particular philosophy. He may create an amalgamation of elements of many philosophies, based on personal interests in particular aspects that he finds of use. On a societal level, the individualist participates on a personally structured political and moral ground. Independent thinking and opinion are necessary traits of an individualist. Jean-Jacques Rousseau claims that his concept of general will in The Social Contract is not the simple collection of individual wills and that it furthers the interests of the individual (the constraint of law itself would be beneficial for the individual, as the lack of respect for the law necessarily entails, in Rousseau's eyes, a form of ignorance and submission to one's passions instead of the preferred autonomy of reason).
Individualism versus collectivism is a common dichotomy in cross-cultural research. Global comparative studies have found that the world's cultures vary in the degree to which they emphasize individual autonomy, freedom and initiative (individualistic traits), respectively conformity to group norms, maintaining traditions and obedience to in-group authority (collectivistic traits). Cultural differences between individualism and collectivism are differences in degrees, not in kind. Cultural individualism is strongly correlated with GDP per capita and venture capital investments. The cultures of economically developed regions such as Australia, New Zealand, Japan, South Korea, North America and Western Europe are the most individualistic in the world. Middle income regions such as Eastern Europe, South America and mainland East Asia have cultures which are neither very individualistic nor very collectivistic. The most collectivistic cultures in the world are from economically developing regions such as the Middle East and Northern Africa, Sub-Saharan Africa, South and South-East Asia, Central Asia and Central America.
An earlier analysis by Ruth Benedict in her book The Chrysanthemum and the Sword states that societies and groups can differ in the extent to which they are based upon predominantly "self-regarding" (individualistic, and/or self-interested) behaviors, rather than "other-regarding" (group-oriented, and group, or society-minded) behaviors. Ruth Benedict made a distinction, relevant in this context, between guilt societies (e.g. medieval Europe) with an "internal reference standard" and shame societies (e.g. Japan, "bringing shame upon one's ancestors") with an "external reference standard", where people look to their peers for feedback on whether an action is acceptable or not.
Individualism is often contrasted either with totalitarianism or with collectivism, but there is a spectrum of behaviors at the societal level ranging from highly individualistic societies through mixed societies to collectivist.
According to an Oxford Dictionary, "competitive individualism" in sociology is "the view that achievement and non-achievement should depend on merit. Effort and ability are regarded as prerequisites of success. Competition is seen as an acceptable means of distributing limited resources and rewards."
Methodological individualism is the view that phenomena can only be understood by examining how they result from the motivations and actions of individual agents. In economics, people's behavior is explained in terms of rational choices, as constrained by prices and incomes. The economist accepts individuals' preferences as givens. Becker and Stigler provide a forceful statement of this view:
On the traditional view, an explanation of economic phenomena that reaches a difference in tastes between people or times is the terminus of the argument: the problem is abandoned at this point to whoever studies and explains tastes (psychologists? anthropologists? phrenologists? sociobiologists?). On our preferred interpretation, one never reaches this impasse: the economist continues to search for differences in prices or incomes to explain any differences or changes in behavior.
"With the abolition of private property, then, we shall have true, beautiful, healthy Individualism. Nobody will waste his life in accumulating things, and the symbols for things. One will live. To live is the rarest thing in the world. Most people exist, that is all."
—Oscar Wilde, The Soul of Man under Socialism, 1891
Individualists are chiefly concerned with protecting individual autonomy against obligations imposed by social institutions (such as the state or religious morality). For L. Susan Brown, "Liberalism and anarchism are two political philosophies that are fundamentally concerned with individual freedom yet differ from one another in very distinct ways. Anarchism shares with liberalism a radical commitment to individual freedom while rejecting liberalism's competitive property relations."
Civil libertarianism is a strain of political thought that supports civil liberties, or which emphasizes the supremacy of individual rights and personal freedoms over and against any kind of authority (such as a state, a corporation and social norms imposed through peer pressure, among others). Civil libertarianism is not a complete ideology; rather, it is a collection of views on the specific issues of civil liberties and civil rights. Because of this, a civil libertarian outlook is compatible with many other political philosophies, and civil libertarianism is found on both the right and left in modern politics. For scholar Ellen Meiksins Wood, "there are doctrines of individualism that are opposed to Lockean individualism [...] and non-Lockean individualism may encompass socialism".
British historians such as Emily Robinson, Camilla Schofield, Florence Sutcliffe-Braithwaite and Natalie Thomlinson have argued that Britons were keen about defining and claiming their individual rights, identities and perspectives by the 1970s, demanding greater personal autonomy and self-determination and less outside control, angrily complaining that the establishment was withholding it. Historians argue that this shift in concerns helped cause Thatcherism and was incorporated into Thatcherism's appeal.
Within anarchism, individualist anarchism represents several traditions of thought within the anarchist movement that emphasize the individual and their will over any kinds of external determinants such as groups, society, traditions and ideological systems. Individualist anarchism is not a single philosophy but refers to a group of individualistic philosophies that sometimes are in conflict.
In 1793, William Godwin, who has often been cited as the first anarchist, wrote Political Justice, which some consider to be the first expression of anarchism. Godwin, a philosophical anarchist, from a rationalist and utilitarian basis opposed revolutionary action and saw a minimal state as a present "necessary evil" that would become increasingly irrelevant and powerless by the gradual spread of knowledge. Godwin advocated individualism, proposing that all cooperation in labour be eliminated on the premise that this would be most conducive with the general good.
An influential form of individualist anarchism called egoism, or egoist anarchism, was expounded by one of the earliest and best-known proponents of individualist anarchism, the German Max Stirner. Stirner's The Ego and Its Own, published in 1844, is a founding text of the philosophy. According to Stirner, the only limitation on the rights of the individual is their power to obtain what they desire, without regard for God, state, or morality. To Stirner, rights were spooks in the mind, and he held that society does not exist but "the individuals are its reality". Stirner advocated self-assertion and foresaw unions of egoists, non-systematic associations continually renewed by all parties' support through an act of will, which Stirner proposed as a form of organization in place of the state. Egoist anarchists claim that egoism will foster genuine and spontaneous union between individuals. Egoist anarchism has inspired many interpretations of Stirner's philosophy. It was re-discovered and promoted by German philosophical anarchist and LGBT activist John Henry Mackay.
Josiah Warren is widely regarded as the first American anarchist and The Peaceful Revolutionist, the four-page weekly paper he edited during 1833, was the first anarchist periodical published. For American anarchist historian Eunice Minette Schuster, "[i]t is apparent [...] that Proudhonian Anarchism was to be found in the United States at least as early as 1848 and that it was not conscious of its affinity to the Individualist Anarchism of Josiah Warren and Stephen Pearl Andrews. [...] William B. Greene presented this Proudhonian Mutualism in its purest and most systematic form". Henry David Thoreau was an important early influence in individualist anarchist thought in the United States and Europe. Thoreau was an American author, poet, naturalist, tax resister, development critic, surveyor, historian, philosopher and leading transcendentalist, who is best known for his book Walden, a reflection upon simple living in natural surroundings, and his essay Civil Disobedience, an argument for individual resistance to civil government in moral opposition to an unjust state. Later, Benjamin Tucker fused Stirner's egoism with the economics of Warren and Proudhon in his eclectic influential publication Liberty.
From these early influences, anarchism and especially individualist anarchism was related to the issues of love and sex. In different countries, this attracted a small but diverse following of bohemian artists and intellectuals, free love and birth control advocates, individualist naturists nudists as in anarcho-naturism, freethought and anti-clerical activists as well as young anarchist outlaws in what came to be known as illegalism and individual reclamation, especially within European individualist anarchism and individualist anarchism in France. These authors and activists included Oscar Wilde, Émile Armand, Han Ryner, Henri Zisly, Renzo Novatore, Miguel Giménez Igualada, Adolf Brand and Lev Chernyi among others. In his important essay The Soul of Man Under Socialism from 1891, Wilde defended socialism as the way to guarantee individualism and so he saw that "[w]ith the abolition of private property, then, we shall have true, beautiful, healthy Individualism. Nobody will waste his life in accumulating things, and the symbols for things. One will live. To live is the rarest thing in the world. Most people exist, that is all". For anarchist historian George Woodcock, "Wilde's aim in The Soul of Man Under Socialism is to seek the society most favorable to the artist. [...] for Wilde art is the supreme end, containing within itself enlightenment and regeneration, to which all else in society must be subordinated. [...] Wilde represents the anarchist as aesthete". Woodcock finds that "[t]he most ambitious contribution to literary anarchism during the 1890s was undoubtedly Oscar Wilde The Soul of Man Under Socialism" and finds that it is influenced mainly by the thought of William Godwin.
Autarchism promotes the principles of individualism, the moral ideology of individual liberty and self-reliance whilst rejecting compulsory government and supporting the elimination of government in favor of ruling oneself to the exclusion of rule by others. Robert LeFevre, recognized as an autarchist by anarcho-capitalist Murray Rothbard, distinguished autarchism from anarchy, whose economics he felt entailed interventions contrary to freedom in contrast to his own laissez-faire economics of the Austrian School.
Liberalism is the school of thought "that attaches [and advances the] importance...[of]...the civil and political rights of individuals and their freedoms of speech and expression." This belief is widely accepted in the United States, Europe, Australia and other Western nations, and was recognized as an important value by many Western philosophers throughout history, in particular since the Enlightenment. It is often rejected by collectivist traditions such as those of Abrahamic or Confucian societies, although Taoists were and are known to be individualists. The Roman Emperor Marcus Aurelius wrote praising "the idea of a polity administered with regard to equal rights and equal freedom of speech, and the idea of a kingly government which respects most of all the freedom of the governed".
Liberalism has its roots in the Age of Enlightenment and rejects many foundational assumptions that dominated most earlier theories of government, such as the Divine Right of Kings, hereditary status, and established religion. John Locke and Montesquieu are often credited with the philosophical foundations of classical liberalism, a political ideology inspired by the broader liberal movement. Locke wrote that "no one ought to harm another in his life, health, liberty, or possessions."
In the 17th century, liberal ideas began to influence European governments in nations such as the Netherlands, Switzerland, England and Poland, but they were strongly opposed, often by armed might, by those who favored absolute monarchy and established religion. In the 18th century, the first modern liberal state was founded without a monarch or a hereditary aristocracy in the United States of America. The US Declaration of Independence includes the words which echo Locke that "all men are created equal; that they are endowed by their Creator with certain unalienable rights; that among these are life, liberty, and the pursuit of happiness; that to insure these rights, governments are instituted among men, deriving their just powers from the consent of the governed."
Liberalism comes in many forms. According to John N. Gray, the essence of liberalism is toleration of different beliefs and of different ideas as to what constitutes a good life.
Liberalism generally values differing political opinions, even if they clash and cause discord.
Egoist anarchism is a school of anarchist thought that originated in the philosophy of Max Stirner, a 19th-century Hegelian philosopher whose "name appears with familiar regularity in historically orientated surveys of anarchist thought as one of the earliest and best-known exponents of individualist anarchism." According to Stirner, the only limitation on the rights of the individual is their power to obtain what they desire, without regard for God, state, or morality. Stirner advocated self-assertion and foresaw unions of egoists, non-systematic associations continually renewed by all parties' support through an act of will which Stirner proposed as a form of organisation in place of the state.
Egoist anarchists argue that egoism will foster genuine and spontaneous union between individuals. Egoism has inspired many interpretations of Stirner's philosophy, but it has also gone beyond Stirner within anarchism. It was re-discovered and promoted by German philosophical anarchist and LGBT activist John Henry Mackay. John Beverley Robinson wrote an essay called "Egoism" in which he states that "Modern egoism, as propounded by Stirner and Nietzsche, and expounded by Ibsen, Shaw and others, is all these; but it is more. It is the realization by the individual that they are an individual; that, as far as they are concerned, they are the only individual." Stirner and Nietzsche, who exerted influence on anarchism despite its opposition, were frequently compared by French "literary anarchists" and anarchist interpretations of Nietzschean ideas appear to have also been influential in the United States.
Ethical egoism, also called simply egoism, is the normative ethical position that moral agents ought to do what is in their own self-interest. It differs from psychological egoism, which claims that people do only act in their self-interest. Ethical egoism also differs from rational egoism which holds merely that it is rational to act in one's self-interest. However, these doctrines may occasionally be combined with ethical egoism.
Ethical egoism contrasts with ethical altruism, which holds that moral agents have an obligation to help and serve others. Egoism and altruism both contrast with ethical utilitarianism, which holds that a moral agent should treat one's self (also known as the subject) with no higher regard than one has for others (as egoism does, by elevating self-interests and "the self" to a status not granted to others), but that one also should not (as altruism does) sacrifice one's own interests to help others' interests, so long as one's own interests (i.e. one's own desires or well-being) are substantially-equivalent to the others' interests and well-being. Egoism, utilitarianism, and altruism are all forms of consequentialism, but egoism and altruism contrast with utilitarianism, in that egoism and altruism are both agent-focused forms of consequentialism (i.e. subject-focused or subjective), but utilitarianism is called agent-neutral (i.e. objective and impartial) as it does not treat the subject's (i.e. the self's, i.e. the moral "agent's") own interests as being more or less important than if the same interests, desires, or well-being were anyone else's.
Ethical egoism does not require moral agents to harm the interests and well-being of others when making moral deliberation, e.g. what is in an agent's self-interest may be incidentally detrimental, beneficial, or neutral in its effect on others. Individualism allows for others' interest and well-being to be disregarded or not as long as what is chosen is efficacious in satisfying the self-interest of the agent. Nor does ethical egoism necessarily entail that in pursuing self-interest one ought always to do what one wants to do, e.g. in the long term the fulfilment of short-term desires may prove detrimental to the self. Fleeting pleasance then takes a back seat to protracted eudaemonia. In the words of James Rachels, "[e]thical egoism [...] endorses selfishness, but it doesn't endorse foolishness."
Ethical egoism is sometimes the philosophical basis for support of libertarianism or individualist anarchism as in Max Stirner, although these can also be based on altruistic motivations. These are political positions based partly on a belief that individuals should not coercively prevent others from exercising freedom of action.
Existentialism is a term applied to the work of a number of 19th- and 20th-century philosophers who generally held, despite profound doctrinal differences, that the focus of philosophical thought should be to deal with the conditions of existence of the individual person and his or her emotions, actions, responsibilities, and thoughts. The early 19th century philosopher Søren Kierkegaard, posthumously regarded as the father of existentialism, maintained that the individual solely has the responsibilities of giving one's own life meaning and living that life passionately and sincerely, in spite of many existential obstacles and distractions including despair, angst, absurdity, alienation and boredom.
Subsequent existential philosophers retain the emphasis on the individual, but differ in varying degrees on how one achieves and what constitutes a fulfilling life, what obstacles must be overcome, and what external and internal factors are involved, including the potential consequences of the existence or non-existence of God. Many existentialists have also regarded traditional systematic or academic philosophy in both style and content as too abstract and remote from concrete human experience. Existentialism became fashionable after World War II as a way to reassert the importance of human individuality and freedom.
Nietzsche's concept of the superman is closely related to the idea of individualism and the pursuit of one's own unique path and potential. The concept reflects Nietzsche's emphasis on the need to overcome traditional moral and societal norms in order to achieve personal growth and self-realization.
Freethought holds that individuals should not accept ideas proposed as truth without recourse to knowledge and reason. Thus, freethinkers strive to build their opinions on the basis of facts, scientific inquiry and logical principles, independent of any logical fallacies or intellectually limiting effects of authority, confirmation bias, cognitive bias, conventional wisdom, popular culture, prejudice, sectarianism, tradition, urban legend and all other dogmas. Regarding religion, freethinkers hold that there is insufficient evidence to scientifically validate the existence of supernatural phenomena.
Humanism is a perspective common to a wide range of ethical stances that attaches importance to human dignity, concerns, and capabilities, particularly rationality. Although the word has many senses, its meaning comes into focus when contrasted to the supernatural or to appeals to authority. Since the 19th century, humanism has been associated with an anti-clericalism inherited from the 18th-century Enlightenment philosophes. 21st century Humanism tends to strongly endorse human rights, including reproductive rights, gender equality, social justice, and the separation of church and state. The term covers organized non-theistic religions, secular humanism, and a humanistic life stance.
Philosophical hedonism is a meta-ethical theory of value which argues that pleasure is the only intrinsic good and pain is the only intrinsic bad. The basic idea behind hedonistic thought is that pleasure (an umbrella term for all inherently likable emotions) is the only thing that is good in and of itself or by its very nature. This implies evaluating the moral worth of character or behavior according to the extent that the pleasure it produces exceeds the pain it entails.
A libertine is one devoid of most moral restraints, which are seen as unnecessary or undesirable, especially one who ignores or even spurns accepted morals and forms of behaviour sanctified by the larger society. Libertines place value on physical pleasures, meaning those experienced through the senses. As a philosophy, libertinism gained new-found adherents in the 17th, 18th, and 19th centuries, particularly in France and Great Britain. Notable among these were John Wilmot, 2nd Earl of Rochester and the Marquis de Sade. During the Baroque era in France, there existed a freethinking circle of philosophers and intellectuals who were collectively known as libertinage érudit and which included Gabriel Naudé, Élie Diodati and François de La Mothe Le Vayer. The critic Vivian de Sola Pinto linked John Wilmot, 2nd Earl of Rochester's libertinism to Hobbesian materialism.
Objectivism is a system of philosophy created by philosopher and novelist Ayn Rand which holds that reality exists independent of consciousness; human beings gain knowledge rationally from perception through the process of concept formation and inductive and deductive logic; the moral purpose of one's life is the pursuit of one's own happiness or rational self-interest. Rand thinks the only social system consistent with this morality is full respect for individual rights, embodied in pure laissez-faire capitalism; and the role of art in human life is to transform man's widest metaphysical ideas, by selective reproduction of reality, into a physical form – a work of art – that he can comprehend and to which he can respond emotionally. Objectivism celebrates man as his own hero, "with his own happiness as the moral purpose of his life, with productive achievement as his noblest activity, and reason as his only absolute."
Philosophical anarchism is an anarchist school of thought which contends that the state lacks moral legitimacy. In contrast to revolutionary anarchism, philosophical anarchism does not advocate violent revolution to eliminate the state but advocates peaceful evolution to overcome it. Although philosophical anarchism does not necessarily imply any action or desire for the elimination of the state, philosophical anarchists do not believe that they have an obligation or duty to obey the state, or conversely that the state has a right to command.
Philosophical anarchism is a component especially of individualist anarchism. Philosophical anarchists of historical note include Mohandas Gandhi, William Godwin, Pierre-Joseph Proudhon, Max Stirner, Benjamin Tucker and Henry David Thoreau. Contemporary philosophical anarchists include A. John Simmons and Robert Paul Wolff.
Subjectivism is a philosophical tenet that accords primacy to subjective experience as fundamental of all measure and law. In extreme forms such as solipsism, it may hold that the nature and existence of every object depends solely on someone's subjective awareness of it. In the proposition 5.632 of the Tractatus Logico-Philosophicus, Ludwig Wittgenstein wrote: "The subject doesn't belong to the world, but it is a limit of the world". Metaphysical subjectivism is the theory that reality is what we perceive to be real, and that there is no underlying true reality that exists independently of perception. One can also hold that it is consciousness rather than perception that is reality (subjective idealism). In probability, a subjectivism stands for the belief that probabilities are simply degrees-of-belief by rational agents in a certain proposition and which have no objective reality in and of themselves.
Ethical subjectivism stands in opposition to moral realism, which claims that moral propositions refer to objective facts, independent of human opinion; to error theory, which denies that any moral propositions are true in any sense; and to non-cognitivism, which denies that moral sentences express propositions at all. The most common forms of ethical subjectivism are also forms of moral relativism, with moral standards held to be relative to each culture or society, i.e. cultural relativism, or even to every individual. The latter view, as put forward by Protagoras, holds that there are as many distinct scales of good and evil as there are subjects in the world. Moral subjectivism is that species of moral relativism that relativizes moral value to the individual subject.
Horst Matthai Quelle was a Spanish language German anarchist philosopher influenced by Max Stirner. Quelle argued that since the individual gives form to the world, he is those objects, the others and the whole universe. One of his main views was a "theory of infinite worlds" which for him was developed by pre-socratic philosophers.
Solipsism is the philosophical idea that only one's own mind is sure to exist. The term comes from Latin solus ("alone") and ipse ("self"). Solipsism as an epistemological position holds that knowledge of anything outside one's own mind is unsure. The external world and other minds cannot be known, and might not exist outside the mind. As a metaphysical position, solipsism goes further to the conclusion that the world and other minds do not exist. Solipsism is the only epistemological position that, by its own postulate, is both irrefutable and yet indefensible in the same manner. Although the number of individuals sincerely espousing solipsism has been small, it is not uncommon for one philosopher to accuse another's arguments of entailing solipsism as an unwanted consequence, in a kind of reductio ad absurdum. In the history of philosophy, solipsism has served as a skeptical hypothesis.
The doctrine of economic individualism holds that each individual should be allowed autonomy in making his or her own economic decisions as opposed to those decisions being made by the community, the corporation or the state for him or her.
Classical liberalism is a political ideology that developed in the 19th century in the Americas, England, France and Western Europe. It followed earlier forms of liberalism in its commitment to personal freedom and popular government, but differed from them in its commitment to classical economics and free markets.
Notable liberals in the 19th century include Jean-Baptiste Say, Thomas Malthus and David Ricardo. Classical liberalism, sometimes also used as a label to refer to all forms of liberalism before the 20th century, was revived in the 20th century by Ludwig von Mises and Friedrich Hayek and further developed by Milton Friedman, Robert Nozick, Loren Lomasky and Jan Narveson.
Libertarianism upholds liberty as a core principle. Libertarians seek to maximize autonomy and political freedom, emphasizing free association, freedom of choice, individualism and voluntary association. Libertarianism shares a skepticism of authority and state power, but libertarians diverge on the scope of their opposition to existing economic and political systems. Various schools of libertarian thought offer a range of views regarding the legitimate functions of state and private power, often calling for the restriction or dissolution of coercive social institutions. Different categorizations have been used to distinguish various forms of libertarianism. This is done to distinguish libertarian views on the nature of property and capital, usually along left–right or socialist–capitalist lines.
Left-libertarianism represents several related yet distinct approaches to politics, society, culture and political and social theory which stress both individual and political freedom alongside social justice. Unlike right-libertarians, left-libertarians believe that neither claiming nor mixing one's labor with natural resources is enough to generate full private property rights, and maintain that natural resources (land, oil, gold, trees) ought to be held in some egalitarian manner, either unowned or owned collectively. Those left-libertarians who support property do so under different property norms and theories, or under the condition that recompense is offered to the local or global community.
Related terms include egalitarian libertarianism, left-wing libertarianism, libertarianism, libertarian socialism, social libertarianism and socialist libertarianism. Left-libertarianism can refer generally to several related and overlapping schools of thought.
Right-libertarianism represents either non-collectivist forms of libertarianism or a variety of different libertarian views that scholars label to the right of libertarianism such as libertarian conservatism. Related terms include conservative libertarianism, libertarian capitalism and right-wing libertarianism. In the mid-20th century, right-libertarian ideologies such as anarcho-capitalism and minarchism co-opted the term libertarian to advocate laissez-faire capitalism and strong private property rights such as in land, infrastructure and natural resources. The latter is the dominant form of libertarianism in the United States, where it advocates civil liberties, natural law, free-market capitalism and a major reversal of the modern welfare state.
With regard to economic questions within individualist socialist schools such as individualist anarchism, there are adherents to mutualism (Pierre Joseph Proudhon, Émile Armand and early Benjamin Tucker); natural rights positions (early Benjamin Tucker, Lysander Spooner and Josiah Warren); and egoistic disrespect for "ghosts" such as private property and markets (Max Stirner, John Henry Mackay, Lev Chernyi, later Benjamin Tucker, Renzo Novatore and illegalism). Contemporary individualist anarchist Kevin Carson characterizes American individualist anarchism saying that "[u]nlike the rest of the socialist movement, the individualist anarchists believed that the natural wage of labor in a free market was its product, and that economic exploitation could only take place when capitalists and landlords harnessed the power of the state in their interests. Thus, individualist anarchism was an alternative both to the increasing statism of the mainstream socialist movement, and to a classical liberal movement that was moving toward a mere apologetic for the power of big business."
Libertarian socialism, sometimes dubbed left-libertarianism and socialist libertarianism, is an anti-authoritarian, anti-statist and libertarian tradition within the socialist movement that rejects the state socialist conception of socialism as a statist form where the state retains centralized control of the economy. Libertarian socialists criticize wage slavery relationships within the workplace, emphasizing workers' self-management of the workplace and decentralized structures of political organization.
Libertarian socialism asserts that a society based on freedom and justice can be achieved through abolishing authoritarian institutions that control certain means of production and subordinate the majority to an owning class or political and economic elite. Libertarian socialists advocate for decentralized structures based on direct democracy and federal or confederal associations such as libertarian municipalism, citizens' assemblies, trade unions and workers' councils.
All of this is generally done within a broader call for liberty and free association through the identification, criticism and practical dismantling of illegitimate authority in all aspects of human life. Within the larger socialist movement, libertarian socialism seeks to distinguish itself from Leninism and social democracy.
Past and present currents and movements commonly described as libertarian socialist include anarchism (especially anarchist schools of thought such as anarcho-communism, anarcho-syndicalism, collectivist anarchism, green anarchism, individualist anarchism, mutualism, and social anarchism) as well as communalism, some forms of democratic socialism, guild socialism, libertarian Marxism (autonomism, council communism, left communism, and Luxemburgism, among others), participism, revolutionary syndicalism and some versions of utopian socialism.
Mutualism is an anarchist school of thought which can be traced to the writings of Pierre-Joseph Proudhon, who envisioned a socialist society where each person possesses a means of production, either individually or collectively, with trade representing equivalent amounts of labor in the free market. Integral to the scheme was the establishment of a mutual-credit bank which would lend to producers at a minimal interest rate, only high enough to cover the costs of administration. Mutualism is based on a labor theory of value which holds that when labor or its product is sold, it ought to receive goods or services in exchange embodying "the amount of labor necessary to produce an article of exactly similar and equal utility" and that receiving anything less would be considered exploitation, theft of labor, or usury.
Plato emphasized that individuals must adhere to laws and perform duties while declining to grant individuals rights to limit or reject state interference in their lives.
German philosopher Georg Wilhelm Friedrich Hegel criticized individualism by claiming that human self-consciousness relies on recognition from others, therefore embracing a holistic view and rejecting the idea of the world as a collection of atomized individuals.
Fascists believe that the liberal emphasis on individual freedom produces national divisiveness.
The anarchist writer and bohemian Oscar Wilde wrote in his famous essay The Soul of Man under Socialism that "Art is individualism, and individualism is a disturbing and disintegrating force. There lies its immense value. For what it seeks is to disturb monotony of type, slavery of custom, tyranny of habit, and the reduction of man to the level of a machine." For anarchist historian George Woodcock, "Wilde's aim in The Soul of Man under Socialism is to seek the society most favorable to the artist, [...] for Wilde art is the supreme end, containing within itself enlightenment and regeneration, to which all else in society must be subordinated. [...] Wilde represents the anarchist as aesthete." In this way, individualism has been used to denote a personality with a strong tendency towards self-creation and experimentation as opposed to tradition or popular mass opinions and behaviors.
Anarchist writer Murray Bookchin describes many individualist anarchists as people who "expressed their opposition in uniquely personal forms, especially in fiery tracts, outrageous behavior, and aberrant lifestyles in the cultural ghettos of fin de siècle New York, Paris, and London. As a credo, individualist anarchism remained largely a bohemian lifestyle, most conspicuous in its demands for sexual freedom ('free love') and enamored of innovations in art, behavior, and clothing."
In relation to this view of individuality, French individualist anarchist Émile Armand advocated egoistical denial of social conventions and dogmas in order to live in accord with one's own ways and desires in daily life, since he emphasized anarchism as a way of life and practice. In this way, he opined that "the anarchist individualist tends to reproduce himself, to perpetuate his spirit in other individuals who will share his views and who will make it possible for a state of affairs to be established from which authoritarianism has been banished. It is this desire, this will, not only to live, but also to reproduce oneself, which we shall call 'activity.'"
In the book Imperfect Garden: The Legacy of Humanism, humanist philosopher Tzvetan Todorov identifies individualism as an important current of socio-political thought within modernity, and as examples of it he mentions Michel de Montaigne, François de La Rochefoucauld, the Marquis de Sade, and Charles Baudelaire. In La Rochefoucauld, he identifies a tendency similar to stoicism in which "the honest person works his being in the manner of a sculptor who searches the liberation of the forms which are inside a block of marble, to extract the truth of that matter." In Baudelaire, he finds the dandy trait in which one seeks to cultivate "the idea of beauty within oneself, of satisfying one's passions of feeling and thinking."
The Russian-American poet Joseph Brodsky once wrote that "[t]he surest defense against Evil is extreme individualism, originality of thinking, whimsicality, even – if you will – eccentricity. That is, something that can't be feigned, faked, imitated; something even a seasoned imposter couldn't be happy with." Ralph Waldo Emerson famously declared that "[w]hoso would be a man must be a nonconformist" – a point of view developed at length in both the life and work of Henry David Thoreau. Equally memorable and influential on Walt Whitman is Emerson's idea that "a foolish consistency is the hobgoblin of small minds, adored by little statesmen and philosophers and divines." Emerson opposed on principle the reliance on civil and religious social structures precisely because through them the individual approaches the divine second-hand, mediated by the once original experience of a genius from another age. According to Emerson, "[a]n institution is the lengthened shadow of one man." To achieve this original relation, Emerson stated that one must "[i]nsist on one's self; never imitate", for if the relationship is secondary the connection is lost.
People in Western countries tend to be more individualistic than communitarian. The authors of one study proposed that this difference is due in part to the influence of the Catholic Church in the Middle Ages. They pointed specifically to its bans on incest, cousin marriage, adoption, and remarriage, and its promotion of the nuclear family over the extended family.
The Catholic Church teaches that "if we pray the Our Father sincerely, we leave individualism behind, because the love that we receive frees us ... our divisions and oppositions have to be overcome". Many Catholics have believed Martin Luther and the Protestant Reformation were sources of individualism.
| Individualism is the moral stance, political philosophy, ideology and social outlook that emphasizes the intrinsic worth of the individual. Individualists promote realizing one's goals and desires, valuing independence and self-reliance, and advocating that the interests of the individual should gain precedence over the state or a social group, while opposing external interference upon one's own interests by society or institutions such as the government. Individualism makes the individual its focus, and so starts "with the fundamental premise that the human individual is of primary importance in the struggle for liberation". Individualism is often defined in contrast to totalitarianism, collectivism and more corporate social forms. Individualism has been used as a term denoting "[t]he quality of being an individual; individuality", related to possessing "[a]n individual characteristic; a quirk". Individualism is also associated with artistic and bohemian interests and lifestyles where there is a tendency towards self-creation and experimentation as opposed to tradition or popular mass opinions and behaviors. It is also associated with humanist philosophical positions and ethics. | 2001-10-24T03:21:38Z | 2023-12-21T17:21:58Z | [
"Template:'",
"Template:Portal",
"Template:Cite SEP",
"Template:Anarchism sidebar",
"Template:Libertarianism sidebar",
"Template:Redirect",
"Template:Lang",
"Template:ISBN",
"Template:Cite news",
"Template:Political spectrum",
"Template:Quote box",
"Template:Webarchive",
"Template:Political ideologies",
"Template:Libertarianism",
"Template:Short description",
"Template:Cite encyclopedia",
"Template:Main",
"Template:Liberalism sidebar",
"Template:Snd",
"Template:Reflist",
"Template:Individualism sidebar",
"Template:Libertarian socialism sidebar",
"Template:Columns-list",
"Template:Cite book",
"Template:Cite web",
"Template:Cite journal",
"Template:Cite thesis",
"Template:Social and political philosophy",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Individualism |
15,187 | In vivo | Studies that are in vivo (Latin for "within the living"; often not italicized in English) are those in which the effects of various biological entities are tested on whole, living organisms or cells, usually animals, including humans, and plants, as opposed to a tissue extract or dead organism. This is not to be confused with experiments done in vitro ("within the glass"), i.e., in a laboratory environment using test tubes, Petri dishes, etc. Examples of investigations in vivo include: the pathogenesis of disease by comparing the effects of bacterial infection with the effects of purified bacterial toxins; the development of non-antibiotics, antiviral drugs, and new drugs generally; and new surgical procedures. Consequently, animal testing and clinical trials are major elements of in vivo research. In vivo testing is often employed over in vitro because it is better suited for observing the overall effects of an experiment on a living subject. In drug discovery, for example, verification of efficacy in vivo is crucial, because in vitro assays can sometimes yield misleading results with drug candidate molecules that are irrelevant in vivo (e.g., because such molecules cannot reach their site of in vivo action, for example as a result of rapid catabolism in the liver).
The English microbiologist Professor Harry Smith and his colleagues in the mid-1950s found that sterile filtrates of serum from animals infected with Bacillus anthracis were lethal for other animals, whereas extracts of culture fluid from the same organism grown in vitro were not. This discovery of anthrax toxin through the use of in vivo experiments had a major impact on studies of the pathogenesis of infectious disease.
The maxim in vivo veritas ("in a living thing [there is] truth") is a play on in vino veritas, ("in wine [there is] truth"), a well-known proverb.
In microbiology, in vivo is often used to refer to experimentation done in a whole organism, rather than in live isolated cells, for example, cultured cells derived from biopsies. In this situation, the more specific term is ex vivo. Once cells are disrupted and individual parts are tested or analyzed, this is known as in vitro.
According to Christopher Lipinski and Andrew Hopkins, "Whether the aim is to discover drugs or to gain knowledge of biological systems, the nature and properties of a chemical tool cannot be considered independently of the system it is to be tested in. Compounds that bind to isolated recombinant proteins are one thing; chemical tools that can perturb cell function another; and pharmacological agents that can be tolerated by a live organism and perturb its systems are yet another. If it were simple to ascertain the properties required to develop a lead discovered in vitro to one that is active in vivo, drug discovery would be as reliable as drug manufacturing." Studies of in vivo behavior have determined the formulations of specific drugs and their behavior in a biorelevant (biologically relevant) medium. | [
{
"paragraph_id": 0,
"text": "Studies that are in vivo (Latin for \"within the living\"; often not italicized in English) are those in which the effects of various biological entities are tested on whole, living organisms or cells, usually animals, including humans, and plants, as opposed to a tissue extract or dead organism. This is not to be confused with experiments done in vitro (\"within the glass\"), i.e., in a laboratory environment using test tubes, Petri dishes, etc. Examples of investigations in vivo include: the pathogenesis of disease by comparing the effects of bacterial infection with the effects of purified bacterial toxins; the development of non-antibiotics, antiviral drugs, and new drugs generally; and new surgical procedures. Consequently, animal testing and clinical trials are major elements of in vivo research. In vivo testing is often employed over in vitro because it is better suited for observing the overall effects of an experiment on a living subject. In drug discovery, for example, verification of efficacy in vivo is crucial, because in vitro assays can sometimes yield misleading results with drug candidate molecules that are irrelevant in vivo (e.g., because such molecules cannot reach their site of in vivo action, for example as a result of rapid catabolism in the liver).",
"title": ""
},
{
"paragraph_id": 1,
"text": "The English microbiologist Professor Harry Smith and his colleagues in the mid-1950s found that sterile filtrates of serum from animals infected with Bacillus anthracis were lethal for other animals, whereas extracts of culture fluid from the same organism grown in vitro were not. This discovery of anthrax toxin through the use of in vivo experiments had a major impact on studies of the pathogenesis of infectious disease.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The maxim in vivo veritas (\"in a living thing [there is] truth\") is a play on in vino veritas, (\"in wine [there is] truth\"), a well-known proverb.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In microbiology, in vivo is often used to refer to experimentation done in a whole organism, rather than in live isolated cells, for example, cultured cells derived from biopsies. In this situation, the more specific term is ex vivo. Once cells are disrupted and individual parts are tested or analyzed, this is known as in vitro.",
"title": "In vivo vs. ex vivo research"
},
{
"paragraph_id": 4,
"text": "According to Christopher Lipinski and Andrew Hopkins, \"Whether the aim is to discover drugs or to gain knowledge of biological systems, the nature and properties of a chemical tool cannot be considered independently of the system it is to be tested in. Compounds that bind to isolated recombinant proteins are one thing; chemical tools that can perturb cell function another; and pharmacological agents that can be tolerated by a live organism and perturb its systems are yet another. If it were simple to ascertain the properties required to develop a lead discovered in vitro to one that is active in vivo, drug discovery would be as reliable as drug manufacturing.\" Studies on In vivo behavior, determined the formulations of set specific drugs and their habits in a Biorelevant (or Biological relevance) medium.",
"title": "Methods of use"
}
]
| Studies that are in vivo are those in which the effects of various biological entities are tested on whole, living organisms or cells, usually animals, including humans, and plants, as opposed to a tissue extract or dead organism. This is not to be confused with experiments done in vitro, i.e., in a laboratory environment using test tubes, Petri dishes, etc. Examples of investigations in vivo include: the pathogenesis of disease by comparing the effects of bacterial infection with the effects of purified bacterial toxins; the development of non-antibiotics, antiviral drugs, and new drugs generally; and new surgical procedures. Consequently, animal testing and clinical trials are major elements of in vivo research. In vivo testing is often employed over in vitro because it is better suited for observing the overall effects of an experiment on a living subject. In drug discovery, for example, verification of efficacy in vivo is crucial, because in vitro assays can sometimes yield misleading results with drug candidate molecules that are irrelevant in vivo. The English microbiologist Professor Harry Smith and his colleagues in the mid-1950s found that sterile filtrates of serum from animals infected with Bacillus anthracis were lethal for other animals, whereas extracts of culture fluid from the same organism grown in vitro were not. This discovery of anthrax toxin through the use of in vivo experiments had a major impact on studies of the pathogenesis of infectious disease. The maxim in vivo veritas is a play on in vino veritas, a well-known proverb. | 2001-10-25T13:29:59Z | 2023-12-31T13:51:41Z | [
"Template:Wiktionary",
"Template:Cite journal",
"Template:Cite web",
"Template:Medical research studies",
"Template:Short description",
"Template:Other uses",
"Template:Italic title",
"Template:Reflist",
"Template:Citation",
"Template:Cite book"
]
| https://en.wikipedia.org/wiki/In_vivo |
15,188 | In vitro | In vitro (meaning in glass, or in the glass) studies are performed with microorganisms, cells, or biological molecules outside their normal biological context. Colloquially called "test-tube experiments", these studies in biology and its subdisciplines are traditionally done in labware such as test tubes, flasks, Petri dishes, and microtiter plates. Studies conducted using components of an organism that have been isolated from their usual biological surroundings permit a more detailed or more convenient analysis than can be done with whole organisms; however, results obtained from in vitro experiments may not fully or accurately predict the effects on a whole organism. In contrast to in vitro experiments, in vivo studies are those conducted in living organisms, including humans, known as clinical trials, and whole plants.
In vitro (Latin: in glass; often not italicized in English usage) studies are conducted using components of an organism that have been isolated from their usual biological surroundings, such as microorganisms, cells, or biological molecules. For example, microorganisms or cells can be studied in artificial culture media, and proteins can be examined in solutions. Colloquially called "test-tube experiments", these studies in biology, medicine, and their subdisciplines are traditionally done in test tubes, flasks, Petri dishes, etc. They now involve the full range of techniques used in molecular biology, such as the omics.
In contrast, studies conducted in living beings (microorganisms, animals, humans, or whole plants) are called in vivo.
Examples of in vitro studies include: the isolation, growth and identification of cells derived from multicellular organisms (in cell or tissue culture); subcellular components (e.g. mitochondria or ribosomes); cellular or subcellular extracts (e.g. wheat germ or reticulocyte extracts); purified molecules (such as proteins, DNA, or RNA); and the commercial production of antibiotics and other pharmaceutical products. Viruses, which only replicate in living cells, are studied in the laboratory in cell or tissue culture, and many animal virologists refer to such work as being in vitro to distinguish it from in vivo work in whole animals.
In vitro studies permit a species-specific, simpler, more convenient, and more detailed analysis than can be done with the whole organism. Just as studies in whole animals more and more replace human trials, so are in vitro studies replacing studies in whole animals.
Living organisms are extremely complex functional systems that are made up of, at a minimum, many tens of thousands of genes, protein molecules, RNA molecules, small organic compounds, inorganic ions, and complexes in an environment that is spatially organized by membranes, and in the case of multicellular organisms, organ systems. These myriad components interact with each other and with their environment in a way that processes food, removes waste, moves components to the correct location, and is responsive to signalling molecules, other organisms, light, sound, heat, taste, touch, and balance.
This complexity makes it difficult to identify the interactions between individual components and to explore their basic biological functions. In vitro work simplifies the system under study, so the investigator can focus on a small number of components.
For example, the identity of proteins of the immune system (e.g. antibodies), and the mechanism by which they recognize and bind to foreign antigens would remain very obscure if not for the extensive use of in vitro work to isolate the proteins, identify the cells and genes that produce them, study the physical properties of their interaction with antigens, and identify how those interactions lead to cellular signals that activate other components of the immune system.
Another advantage of in vitro methods is that human cells can be studied without "extrapolation" from an experimental animal's cellular response.
In vitro methods can be miniaturized and automated, yielding high-throughput screening methods for testing molecules in pharmacology or toxicology.
The primary disadvantage of in vitro experimental studies is that it may be challenging to extrapolate from the results of in vitro work back to the biology of the intact organism. Investigators doing in vitro work must be careful to avoid over-interpretation of their results, which can lead to erroneous conclusions about organismal and systems biology.
For example, scientists developing a new viral drug to treat an infection with a pathogenic virus (e.g., HIV-1) may find that a candidate drug functions to prevent viral replication in an in vitro setting (typically cell culture). However, before this drug is used in the clinic, it must progress through a series of in vivo trials to determine if it is safe and effective in intact organisms (typically small animals, primates, and humans in succession). Typically, most candidate drugs that are effective in vitro prove to be ineffective in vivo because of issues associated with delivery of the drug to the affected tissues, toxicity towards essential parts of the organism that were not represented in the initial in vitro studies, or other issues.
A method which could help decrease animal testing is the use of in vitro batteries, where several in vitro assays are compiled to cover multiple endpoints. Within developmental neurotoxicity and reproductive toxicity there are hopes for test batteries to become easy screening methods for prioritizing which chemicals should be risk assessed and in which order. Within ecotoxicology, in vitro test batteries are already in use for regulatory purposes and for toxicological evaluation of chemicals. In vitro tests can also be combined with in vivo testing to make an in vitro–in vivo test battery, for example for pharmaceutical testing.
Results obtained from in vitro experiments cannot usually be transposed, as is, to predict the reaction of an entire organism in vivo. Building a consistent and reliable extrapolation procedure from in vitro results to in vivo is therefore extremely important. Solutions include:
These two approaches are not incompatible; better in vitro systems provide better data to mathematical models. However, increasingly sophisticated in vitro experiments collect increasingly numerous, complex, and challenging data to integrate. Mathematical models, such as systems biology models, are much needed here.
In pharmacology, IVIVE can be used to approximate pharmacokinetics (PK) or pharmacodynamics (PD). Since the timing and intensity of effects on a given target depend on the concentration time course of candidate drug (parent molecule or metabolites) at that target site, in vivo tissue and organ sensitivities can be completely different or even inverse of those observed on cells cultured and exposed in vitro. That indicates that extrapolating effects observed in vitro needs a quantitative model of in vivo PK. Physiologically based PK (PBPK) models are generally accepted to be central to the extrapolations.
In the case of early effects or those without intercellular communications, the same cellular exposure concentration is assumed to cause the same effects, both qualitatively and quantitatively, in vitro and in vivo. In these conditions, developing a simple PD model of the dose–response relationship observed in vitro, and transposing it without changes to predict in vivo effects is not enough. | [
{
"paragraph_id": 0,
"text": "In vitro (meaning in glass, or in the glass) studies are performed with microorganisms, cells, or biological molecules outside their normal biological context. Colloquially called \"test-tube experiments\", these studies in biology and its subdisciplines are traditionally done in labware such as test tubes, flasks, Petri dishes, and microtiter plates. Studies conducted using components of an organism that have been isolated from their usual biological surroundings permit a more detailed or more convenient analysis than can be done with whole organisms; however, results obtained from in vitro experiments may not fully or accurately predict the effects on a whole organism. In contrast to in vitro experiments, in vivo studies are those conducted in living organisms, including humans, known as clinical trials, and whole plants.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In vitro (Latin: in glass; often not italicized in English usage) studies are conducted using components of an organism that have been isolated from their usual biological surroundings, such as microorganisms, cells, or biological molecules. For example, microorganisms or cells can be studied in artificial culture media, and proteins can be examined in solutions. Colloquially called \"test-tube experiments\", these studies in biology, medicine, and their subdisciplines are traditionally done in test tubes, flasks, Petri dishes, etc. They now involve the full range of techniques used in molecular biology, such as the omics.",
"title": "Definition"
},
{
"paragraph_id": 2,
"text": "In contrast, studies conducted in living beings (microorganisms, animals, humans, or whole plants) are called in vivo.",
"title": "Definition"
},
{
"paragraph_id": 3,
"text": "Examples of in vitro studies include: the isolation, growth and identification of cells derived from multicellular organisms (in cell or tissue culture); subcellular components (e.g. mitochondria or ribosomes); cellular or subcellular extracts (e.g. wheat germ or reticulocyte extracts); purified molecules (such as proteins, DNA, or RNA); and the commercial production of antibiotics and other pharmaceutical products. Viruses, which only replicate in living cells, are studied in the laboratory in cell or tissue culture, and many animal virologists refer to such work as being in vitro to distinguish it from in vivo work in whole animals.",
"title": "Examples"
},
{
"paragraph_id": 4,
"text": "In vitro studies permit a species-specific, simpler, more convenient, and more detailed analysis than can be done with the whole organism. Just as studies in whole animals more and more replace human trials, so are in vitro studies replacing studies in whole animals.",
"title": "Advantages"
},
{
"paragraph_id": 5,
"text": "Living organisms are extremely complex functional systems that are made up of, at a minimum, many tens of thousands of genes, protein molecules, RNA molecules, small organic compounds, inorganic ions, and complexes in an environment that is spatially organized by membranes, and in the case of multicellular organisms, organ systems. These myriad components interact with each other and with their environment in a way that processes food, removes waste, moves components to the correct location, and is responsive to signalling molecules, other organisms, light, sound, heat, taste, touch, and balance.",
"title": "Advantages"
},
{
"paragraph_id": 6,
"text": "This complexity makes it difficult to identify the interactions between individual components and to explore their basic biological functions. In vitro work simplifies the system under study, so the investigator can focus on a small number of components.",
"title": "Advantages"
},
{
"paragraph_id": 7,
"text": "For example, the identity of proteins of the immune system (e.g. antibodies), and the mechanism by which they recognize and bind to foreign antigens would remain very obscure if not for the extensive use of in vitro work to isolate the proteins, identify the cells and genes that produce them, study the physical properties of their interaction with antigens, and identify how those interactions lead to cellular signals that activate other components of the immune system.",
"title": "Advantages"
},
{
"paragraph_id": 8,
"text": "Another advantage of in vitro methods is that human cells can be studied without \"extrapolation\" from an experimental animal's cellular response.",
"title": "Advantages"
},
{
"paragraph_id": 9,
"text": "In vitro methods can be miniaturized and automated, yielding high-throughput screening methods for testing molecules in pharmacology or toxicology.",
"title": "Advantages"
},
{
"paragraph_id": 10,
"text": "The primary disadvantage of in vitro experimental studies is that it may be challenging to extrapolate from the results of in vitro work back to the biology of the intact organism. Investigators doing in vitro work must be careful to avoid over-interpretation of their results, which can lead to erroneous conclusions about organismal and systems biology.",
"title": "Disadvantages"
},
{
"paragraph_id": 11,
"text": "For example, scientists developing a new viral drug to treat an infection with a pathogenic virus (e.g., HIV-1) may find that a candidate drug functions to prevent viral replication in an in vitro setting (typically cell culture). However, before this drug is used in the clinic, it must progress through a series of in vivo trials to determine if it is safe and effective in intact organisms (typically small animals, primates, and humans in succession). Typically, most candidate drugs that are effective in vitro prove to be ineffective in vivo because of issues associated with delivery of the drug to the affected tissues, toxicity towards essential parts of the organism that were not represented in the initial in vitro studies, or other issues.",
"title": "Disadvantages"
},
{
"paragraph_id": 12,
"text": "A method which could help decrease animal testing is the use of in vitro batteries, where several in vitro assays are compiled to cover multiple endpoints. Within developmental neurotoxicity and reproductive toxicity there are hopes for test batteries to become easy screening methods for prioritization for which chemicals to be risk assessed and in which order. Within ecotoxicology in vitro test batteries are already in use for regulatory purpose and for toxicological evaluation of chemicals. In vitro tests can also be combined with in vivo testing to make a in vitro in vivo test battery, for example for pharmaceutical testing.",
"title": "In vitro test batteries"
},
{
"paragraph_id": 13,
"text": "Results obtained from in vitro experiments cannot usually be transposed, as is, to predict the reaction of an entire organism in vivo. Building a consistent and reliable extrapolation procedure from in vitro results to in vivo is therefore extremely important. Solutions include:",
"title": "In vitro to in vivo extrapolation"
},
{
"paragraph_id": 14,
"text": "These two approaches are not incompatible; better in vitro systems provide better data to mathematical models. However, increasingly sophisticated in vitro experiments collect increasingly numerous, complex, and challenging data to integrate. Mathematical models, such as systems biology models, are much needed here.",
"title": "In vitro to in vivo extrapolation"
},
{
"paragraph_id": 15,
"text": "In pharmacology, IVIVE can be used to approximate pharmacokinetics (PK) or pharmacodynamics (PD). Since the timing and intensity of effects on a given target depend on the concentration time course of candidate drug (parent molecule or metabolites) at that target site, in vivo tissue and organ sensitivities can be completely different or even inverse of those observed on cells cultured and exposed in vitro. That indicates that extrapolating effects observed in vitro needs a quantitative model of in vivo PK. Physiologically based PK (PBPK) models are generally accepted to be central to the extrapolations.",
"title": "In vitro to in vivo extrapolation"
},
{
"paragraph_id": 16,
"text": "In the case of early effects or those without intercellular communications, the same cellular exposure concentration is assumed to cause the same effects, both qualitatively and quantitatively, in vitro and in vivo. In these conditions, developing a simple PD model of the dose–response relationship observed in vitro, and transposing it without changes to predict in vivo effects is not enough.",
"title": "In vitro to in vivo extrapolation"
}
]
| In vitro studies are performed with microorganisms, cells, or biological molecules outside their normal biological context. Colloquially called "test-tube experiments", these studies in biology and its subdisciplines are traditionally done in labware such as test tubes, flasks, Petri dishes, and microtiter plates. Studies conducted using components of an organism that have been isolated from their usual biological surroundings permit a more detailed or more convenient analysis than can be done with whole organisms; however, results obtained from in vitro experiments may not fully or accurately predict the effects on a whole organism. In contrast to in vitro experiments, in vivo studies are those conducted in living organisms, including humans, known as clinical trials, and whole plants. | 2001-10-25T13:32:42Z | 2023-12-18T07:53:35Z | [
"Template:Lang-la",
"Template:Citation needed",
"Template:Reflist",
"Template:Commons category-inline",
"Template:Short description",
"Template:Main",
"Template:Cite book",
"Template:About",
"Template:Cite web",
"Template:Citation",
"Template:Wiktionary",
"Template:Lang",
"Template:Cite journal",
"Template:Medical research studies",
"Template:Italic title"
]
| https://en.wikipedia.org/wiki/In_vitro |
15,189 | IEEE 754-1985 | IEEE 754-1985 is a historic industry standard for representing floating-point numbers in computers, officially adopted in 1985 and superseded in 2008 by IEEE 754-2008, and then again in 2019 by minor revision IEEE 754-2019. During its 23 years, it was the most widely used format for floating-point computation. It was implemented in software, in the form of floating-point libraries, and in hardware, in the instructions of many CPUs and FPUs. The first integrated circuit to implement the draft of what was to become IEEE 754-1985 was the Intel 8087.
IEEE 754-1985 represents numbers in binary, providing definitions for four levels of precision, of which the two most commonly used are:
The standard also defines representations for positive and negative infinity, a "negative zero", five exceptions to handle invalid results like division by zero, special values called NaNs for representing those exceptions, denormal numbers to represent numbers smaller than shown above, and four rounding modes.
Floating-point numbers in IEEE 754 format consist of three fields: a sign bit, a biased exponent, and a fraction. The following example illustrates the meaning of each.
The decimal number 0.15625₁₀ represented in binary is 0.00101₂ (that is, 1/8 + 1/32). (Subscripts indicate the number base.) Analogous to scientific notation, where numbers are written to have a single non-zero digit to the left of the decimal point, we rewrite this number so it has a single 1 bit to the left of the "binary point". We simply multiply by the appropriate power of 2 to compensate for shifting the bits left by three positions:
Now we can read off the fraction and the exponent: the fraction is .01₂ and the exponent is −3.
As illustrated in the pictures, the three fields in the IEEE 754 representation of this number are:
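The decomposition can be checked directly on a machine. The following sketch (in Python, using only the standard struct module; the article itself prescribes no language) unpacks the bit pattern of 0.15625 and prints the three fields named above: a sign of 0, a biased exponent of 124 (that is, −3 + 127), and a fraction field whose leading bits are 01.
import struct
bits = struct.unpack(">I", struct.pack(">f", 0.15625))[0]   # raw 32-bit pattern
sign = bits >> 31                # 1-bit sign field
exponent = (bits >> 23) & 0xFF   # 8-bit biased exponent field
fraction = bits & 0x7FFFFF       # 23-bit fraction field (the implicit leading 1 is not stored)
print(f"{bits:032b}")                   # 00111110001000000000000000000000
print(sign, exponent, exponent - 127)   # 0 124 -3
print(f"{fraction:023b}")               # 01000000000000000000000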
IEEE 754 adds a bias to the exponent so that numbers can in many cases be compared conveniently by the same hardware that compares signed 2's-complement integers. Using a biased exponent, the lesser of two positive floating-point numbers will come out "less than" the greater following the same ordering as for sign and magnitude integers. If two floating-point numbers have different signs, the sign-and-magnitude comparison also works with biased exponents. However, if both biased-exponent floating-point numbers are negative, then the ordering must be reversed. If the exponent were represented as, say, a 2's-complement number, comparison to see which of two numbers is greater would not be as convenient.
The leading 1 bit is omitted since all numbers except zero start with a leading 1; the leading 1 is implicit and doesn't actually need to be stored which gives an extra bit of precision for "free."
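Going the other way, a normal value can be reconstructed from its stored fields by putting the implicit leading 1 back. A minimal sketch in Python (the function name and arguments are illustrative only):
def decode_normal(sign, biased_exponent, fraction):
    significand = 1 + fraction / 2**23              # restore the implicit leading 1
    return (-1)**sign * significand * 2.0**(biased_exponent - 127)
print(decode_normal(0, 124, 0b01000000000000000000000))   # 0.15625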
The number zero is represented specially:
The number representations described above are called normalized, meaning that the implicit leading binary digit is a 1. To reduce the loss of precision when an underflow occurs, IEEE 754 includes the ability to represent fractions smaller than are possible in the normalized representation, by making the implicit leading digit a 0. Such numbers are called denormal. They don't include as many significant digits as a normalized number, but they enable a gradual loss of precision when the result of an operation is not exactly zero but is too close to zero to be represented by a normalized number.
A denormal number is represented with a biased exponent of all 0 bits, which represents an exponent of −126 in single precision (not −127), or −1022 in double precision (not −1023). In contrast, the smallest biased exponent representing a normal number is 1 (see examples below).
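A sketch of what this looks like in practice: the smallest positive single-precision denormal has a biased exponent of 0 and a fraction field of 1, giving the value 2^−23 × 2^−126 = 2^−149. The Python snippet below (standard library only) builds that bit pattern by hand.
import struct
smallest = struct.unpack(">f", struct.pack(">I", 0x00000001))[0]
print(smallest)                # about 1.4e-45
print(smallest == 2**-149)     # True
print(2.0**-149 < 2.0**-126)   # True: smaller than any normalized single-precision value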
The biased-exponent field is filled with all 1 bits to indicate either infinity or an invalid result of a computation.
Positive and negative infinity are represented thus:
Some operations of floating-point arithmetic are invalid, such as taking the square root of a negative number. The act of reaching an invalid result is called a floating-point exception. An exceptional result is represented by a special code called a NaN, for "Not a Number". All NaNs in IEEE 754-1985 have this format:
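The special encodings can also be built by hand. In the sketch below (Python, standard library only), an exponent field of all 1 bits with a zero fraction decodes to infinity, while the same exponent with a non-zero fraction decodes to a NaN; the particular pattern 0x7FC00000 is only one example of a NaN, since any non-zero fraction will do. As the standard requires, a NaN compares unequal even to itself.
import struct
def f32_from_bits(bits):
    return struct.unpack(">f", struct.pack(">I", bits))[0]
pos_inf = f32_from_bits(0x7F800000)   # sign 0, exponent all 1s, fraction 0
a_nan = f32_from_bits(0x7FC00000)     # exponent all 1s, non-zero fraction
print(pos_inf)          # inf
print(a_nan)            # nan
print(a_nan == a_nan)   # False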
Precision is defined as the minimum difference between two successive mantissa representations; thus it is a function only of the mantissa, while the gap is defined as the difference between two successive numbers.
Single-precision numbers occupy 32 bits. In single precision:
Some example range and gap values for given exponents in single precision:
As an example, 16,777,217 cannot be encoded as a 32-bit float as it will be rounded to 16,777,216. This shows why floating point arithmetic is unsuitable for accounting software. However, all integers within the representable range that are a power of 2 can be stored in a 32-bit float without rounding.
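This rounding is easy to reproduce: round-tripping values through single precision shows 16,777,217 collapsing onto its neighbour while the surrounding even integers survive. A sketch in Python (standard library only):
import struct
def round_to_f32(x):
    # Pack to single precision and unpack again, so x passes through a 32-bit float.
    return struct.unpack(">f", struct.pack(">f", x))[0]
print(round_to_f32(16777216.0))   # 16777216.0 (2**24, exactly representable)
print(round_to_f32(16777217.0))   # 16777216.0 (needs 25 significand bits, so it is rounded)
print(round_to_f32(16777218.0))   # 16777218.0 (representable again)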
Double-precision numbers occupy 64 bits. In double precision:
Some example range and gap values for given exponents in double precision:
The standard also recommends extended format(s) to be used to perform internal computations at a higher precision than that required for the final result, to minimise round-off errors: the standard only specifies minimum precision and exponent requirements for such formats. The x87 80-bit extended format is the most commonly implemented extended format that meets these requirements.
Here are some examples of single-precision IEEE 754 representations:
Every possible bit combination is either a NaN or a number with a unique value in the affinely extended real number system with its associated order, except for the two combinations of bits for negative zero and positive zero, which sometimes require special attention (see below). The binary representation has the special property that, excluding NaNs, any two numbers can be compared as sign and magnitude integers (endianness issues apply). When comparing as 2's-complement integers: If the sign bits differ, the negative number precedes the positive number, so 2's complement gives the correct result (except that negative zero and positive zero should be considered equal). If both values are positive, the 2's complement comparison again gives the correct result. Otherwise (two negative numbers), the correct FP ordering is the opposite of the 2's complement ordering.
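One well-known way to exploit this property (a sketch, not something mandated by the standard) is to remap each bit pattern to an unsigned key whose ordinary integer ordering matches the floating-point ordering: set the sign bit for non-negative values and invert all bits for negative ones. NaNs are excluded, and negative zero sorts just below positive zero under this key.
import struct
def f32_bits(x):
    return struct.unpack(">I", struct.pack(">f", x))[0]
def order_key(x):
    b = f32_bits(x)
    # Negative values: invert everything; non-negative values: set the sign bit.
    return b ^ 0xFFFFFFFF if b & 0x80000000 else b | 0x80000000
values = [3.5, -1.0, 0.25, -2.75, 0.0]
print(sorted(values, key=order_key))                     # [-2.75, -1.0, 0.0, 0.25, 3.5]
print(sorted(values, key=order_key) == sorted(values))   # True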
Rounding errors inherent to floating point calculations may limit the use of comparisons for checking the exact equality of results. Choosing an acceptable range is a complex topic. A common technique is to use a comparison epsilon value to perform approximate comparisons. Depending on how lenient the comparisons are, common values include 1e-6 or 1e-5 for single-precision, and 1e-14 for double-precision. Another common technique is ULP, which checks what the difference is in the last place digits, effectively checking how many steps away the two values are.
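A minimal sketch of such an approximate comparison in Python follows; the tolerance values are illustrative defaults rather than recommendations from the standard, and the standard library's math.isclose offers the same idea ready-made.
import math
def approx_equal(a, b, rel_tol=1e-6, abs_tol=1e-12):
    # Accept a and b as equal if they differ by less than a relative or absolute tolerance.
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)
x = 0.1 + 0.2
print(x == 0.3)                             # False: both sides carry rounding error
print(approx_equal(x, 0.3))                 # True
print(math.isclose(x, 0.3, rel_tol=1e-9))   # True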
Although negative zero and positive zero are generally considered equal for comparison purposes, some programming language relational operators and similar constructs treat them as distinct. According to the Java Language Specification, comparison and equality operators treat them as equal, but Math.min() and Math.max() distinguish them (officially starting with Java version 1.1 but actually with 1.1.1), as do the comparison methods equals(), compareTo() and even compare() of classes Float and Double.
The IEEE standard has four different rounding modes; the first is the default; the others are called directed roundings.
The IEEE standard employs (and extends) the affinely extended real number system, with separate positive and negative infinities. During drafting, there was a proposal for the standard to incorporate the projectively extended real number system, with a single unsigned infinity, by providing programmers with a mode selection option. In the interest of reducing the complexity of the final standard, the projective mode was dropped, however. The Intel 8087 and Intel 80287 floating point co-processors both support this projective mode.
The following functions must be provided:
In 1976, Intel was starting the development of a floating-point coprocessor. Intel hoped to be able to sell a chip containing good implementations of all the operations found in the widely varying maths software libraries.
John Palmer, who managed the project, believed the effort should be backed by a standard unifying floating point operations across disparate processors. He contacted William Kahan of the University of California, who had helped improve the accuracy of Hewlett-Packard's calculators. Kahan suggested that Intel use the floating point of Digital Equipment Corporation's (DEC) VAX. The first VAX, the VAX-11/780 had just come out in late 1977, and its floating point was highly regarded. However, seeking to market their chip to the broadest possible market, Intel wanted the best floating point possible, and Kahan went on to draw up specifications. Kahan initially recommended that the floating point base be decimal but the hardware design of the coprocessor was too far along to make that change.
The work within Intel worried other vendors, who set up a standardization effort to ensure a "level playing field". Kahan attended the second IEEE 754 standards working group meeting, held in November 1977. He subsequently received permission from Intel to put forward a draft proposal based on his work for their coprocessor; he was allowed to explain details of the format and its rationale, but not anything related to Intel's implementation architecture. The draft was co-written with Jerome Coonen and Harold Stone, and was initially known as the "Kahan-Coonen-Stone proposal" or "K-C-S format".
As an 8-bit exponent was not wide enough for some operations desired for double-precision numbers, e.g. to store the product of two 32-bit numbers, both Kahan's proposal and a counter-proposal by DEC therefore used 11 bits, like the time-tested 60-bit floating-point format of the CDC 6600 from 1965. Kahan's proposal also provided for infinities, which are useful when dealing with division-by-zero conditions; not-a-number values, which are useful when dealing with invalid operations; denormal numbers, which help mitigate problems caused by underflow; and a better balanced exponent bias, which can help avoid overflow and underflow when taking the reciprocal of a number.
Even before it was approved, the draft standard had been implemented by a number of manufacturers. The Intel 8087, which was announced in 1980, was the first chip to implement the draft standard.
In 1980, the Intel 8087 chip was already released, but DEC remained opposed, to denormal numbers in particular, because of performance concerns and since it would give DEC a competitive advantage to standardise on DEC's format.
The arguments over gradual underflow lasted until 1981 when an expert hired by DEC to assess it sided against the dissenters. DEC had the study done in order to demonstrate that gradual underflow was a bad idea, but the study concluded the opposite, and DEC gave in. In 1985, the standard was ratified, but it had already become the de facto standard a year earlier, implemented by many manufacturers. | [
{
"paragraph_id": 0,
"text": "IEEE 754-1985 is a historic industry standard for representing floating-point numbers in computers, officially adopted in 1985 and superseded in 2008 by IEEE 754-2008, and then again in 2019 by minor revision IEEE 754-2019. During its 23 years, it was the most widely used format for floating-point computation. It was implemented in software, in the form of floating-point libraries, and in hardware, in the instructions of many CPUs and FPUs. The first integrated circuit to implement the draft of what was to become IEEE 754-1985 was the Intel 8087.",
"title": ""
},
{
"paragraph_id": 1,
"text": "IEEE 754-1985 represents numbers in binary, providing definitions for four levels of precision, of which the two most commonly used are:",
"title": ""
},
{
"paragraph_id": 2,
"text": "The standard also defines representations for positive and negative infinity, a \"negative zero\", five exceptions to handle invalid results like division by zero, special values called NaNs for representing those exceptions, denormal numbers to represent numbers smaller than shown above, and four rounding modes.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Floating-point numbers in IEEE 754 format consist of three fields: a sign bit, a biased exponent, and a fraction. The following example illustrates the meaning of each.",
"title": "Representation of numbers"
},
{
"paragraph_id": 4,
"text": "The decimal number 0.1562510 represented in binary is 0.001012 (that is, 1/8 + 1/32). (Subscripts indicate the number base.) Analogous to scientific notation, where numbers are written to have a single non-zero digit to the left of the decimal point, we rewrite this number so it has a single 1 bit to the left of the \"binary point\". We simply multiply by the appropriate power of 2 to compensate for shifting the bits left by three positions:",
"title": "Representation of numbers"
},
{
"paragraph_id": 5,
"text": "Now we can read off the fraction and the exponent: the fraction is .012 and the exponent is −3.",
"title": "Representation of numbers"
},
{
"paragraph_id": 6,
"text": "As illustrated in the pictures, the three fields in the IEEE 754 representation of this number are:",
"title": "Representation of numbers"
},
{
"paragraph_id": 7,
"text": "IEEE 754 adds a bias to the exponent so that numbers can in many cases be compared conveniently by the same hardware that compares signed 2's-complement integers. Using a biased exponent, the lesser of two positive floating-point numbers will come out \"less than\" the greater following the same ordering as for sign and magnitude integers. If two floating-point numbers have different signs, the sign-and-magnitude comparison also works with biased exponents. However, if both biased-exponent floating-point numbers are negative, then the ordering must be reversed. If the exponent were represented as, say, a 2's-complement number, comparison to see which of two numbers is greater would not be as convenient.",
"title": "Representation of numbers"
},
{
"paragraph_id": 8,
"text": "The leading 1 bit is omitted since all numbers except zero start with a leading 1; the leading 1 is implicit and doesn't actually need to be stored which gives an extra bit of precision for \"free.\"",
"title": "Representation of numbers"
},
{
"paragraph_id": 9,
"text": "The number zero is represented specially:",
"title": "Representation of numbers"
},
{
"paragraph_id": 10,
"text": "The number representations described above are called normalized, meaning that the implicit leading binary digit is a 1. To reduce the loss of precision when an underflow occurs, IEEE 754 includes the ability to represent fractions smaller than are possible in the normalized representation, by making the implicit leading digit a 0. Such numbers are called denormal. They don't include as many significant digits as a normalized number, but they enable a gradual loss of precision when the result of an operation is not exactly zero but is too close to zero to be represented by a normalized number.",
"title": "Representation of numbers"
},
{
"paragraph_id": 11,
"text": "A denormal number is represented with a biased exponent of all 0 bits, which represents an exponent of −126 in single precision (not −127), or −1022 in double precision (not −1023). In contrast, the smallest biased exponent representing a normal number is 1 (see examples below).",
"title": "Representation of numbers"
},
{
"paragraph_id": 12,
"text": "The biased-exponent field is filled with all 1 bits to indicate either infinity or an invalid result of a computation.",
"title": "Representation of non-numbers"
},
{
"paragraph_id": 13,
"text": "Positive and negative infinity are represented thus:",
"title": "Representation of non-numbers"
},
{
"paragraph_id": 14,
"text": "Some operations of floating-point arithmetic are invalid, such as taking the square root of a negative number. The act of reaching an invalid result is called a floating-point exception. An exceptional result is represented by a special code called a NaN, for \"Not a Number\". All NaNs in IEEE 754-1985 have this format:",
"title": "Representation of non-numbers"
},
{
"paragraph_id": 15,
"text": "Precision is defined as the minimum difference between two successive mantissa representations; thus it is a function only in the mantissa; while the gap is defined as the difference between two successive numbers.",
"title": "Range and precision"
},
{
"paragraph_id": 16,
"text": "Single-precision numbers occupy 32 bits. In single precision:",
"title": "Range and precision"
},
{
"paragraph_id": 17,
"text": "Some example range and gap values for given exponents in single precision:",
"title": "Range and precision"
},
{
"paragraph_id": 18,
"text": "As an example, 16,777,217 cannot be encoded as a 32-bit float as it will be rounded to 16,777,216. This shows why floating point arithmetic is unsuitable for accounting software. However, all integers within the representable range that are a power of 2 can be stored in a 32-bit float without rounding.",
"title": "Range and precision"
},
{
"paragraph_id": 19,
"text": "Double-precision numbers occupy 64 bits. In double precision:",
"title": "Range and precision"
},
{
"paragraph_id": 20,
"text": "Some example range and gap values for given exponents in double precision:",
"title": "Range and precision"
},
{
"paragraph_id": 21,
"text": "The standard also recommends extended format(s) to be used to perform internal computations at a higher precision than that required for the final result, to minimise round-off errors: the standard only specifies minimum precision and exponent requirements for such formats. The x87 80-bit extended format is the most commonly implemented extended format that meets these requirements.",
"title": "Range and precision"
},
{
"paragraph_id": 22,
"text": "Here are some examples of single-precision IEEE 754 representations:",
"title": "Examples"
},
{
"paragraph_id": 23,
"text": "Every possible bit combination is either a NaN or a number with a unique value in the affinely extended real number system with its associated order, except for the two combinations of bits for negative zero and positive zero, which sometimes require special attention (see below). The binary representation has the special property that, excluding NaNs, any two numbers can be compared as sign and magnitude integers (endianness issues apply). When comparing as 2's-complement integers: If the sign bits differ, the negative number precedes the positive number, so 2's complement gives the correct result (except that negative zero and positive zero should be considered equal). If both values are positive, the 2's complement comparison again gives the correct result. Otherwise (two negative numbers), the correct FP ordering is the opposite of the 2's complement ordering.",
"title": "Comparing floating-point numbers"
},
{
"paragraph_id": 24,
"text": "Rounding errors inherent to floating point calculations may limit the use of comparisons for checking the exact equality of results. Choosing an acceptable range is a complex topic. A common technique is to use a comparison epsilon value to perform approximate comparisons. Depending on how lenient the comparisons are, common values include 1e-6 or 1e-5 for single-precision, and 1e-14 for double-precision. Another common technique is ULP, which checks what the difference is in the last place digits, effectively checking how many steps away the two values are.",
"title": "Comparing floating-point numbers"
},
{
"paragraph_id": 25,
"text": "Although negative zero and positive zero are generally considered equal for comparison purposes, some programming language relational operators and similar constructs treat them as distinct. According to the Java Language Specification, comparison and equality operators treat them as equal, but Math.min() and Math.max() distinguish them (officially starting with Java version 1.1 but actually with 1.1.1), as do the comparison methods equals(), compareTo() and even compare() of classes Float and Double.",
"title": "Comparing floating-point numbers"
},
{
"paragraph_id": 26,
"text": "The IEEE standard has four different rounding modes; the first is the default; the others are called directed roundings.",
"title": "Rounding floating-point numbers"
},
{
"paragraph_id": 27,
"text": "The IEEE standard employs (and extends) the affinely extended real number system, with separate positive and negative infinities. During drafting, there was a proposal for the standard to incorporate the projectively extended real number system, with a single unsigned infinity, by providing programmers with a mode selection option. In the interest of reducing the complexity of the final standard, the projective mode was dropped, however. The Intel 8087 and Intel 80287 floating point co-processors both support this projective mode.",
"title": "Extending the real numbers"
},
{
"paragraph_id": 28,
"text": "The following functions must be provided:",
"title": "Functions and predicates"
},
{
"paragraph_id": 29,
"text": "In 1976, Intel was starting the development of a floating-point coprocessor. Intel hoped to be able to sell a chip containing good implementations of all the operations found in the widely varying maths software libraries.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "John Palmer, who managed the project, believed the effort should be backed by a standard unifying floating point operations across disparate processors. He contacted William Kahan of the University of California, who had helped improve the accuracy of Hewlett-Packard's calculators. Kahan suggested that Intel use the floating point of Digital Equipment Corporation's (DEC) VAX. The first VAX, the VAX-11/780 had just come out in late 1977, and its floating point was highly regarded. However, seeking to market their chip to the broadest possible market, Intel wanted the best floating point possible, and Kahan went on to draw up specifications. Kahan initially recommended that the floating point base be decimal but the hardware design of the coprocessor was too far along to make that change.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "The work within Intel worried other vendors, who set up a standardization effort to ensure a \"level playing field\". Kahan attended the second IEEE 754 standards working group meeting, held in November 1977. He subsequently received permission from Intel to put forward a draft proposal based on his work for their coprocessor; he was allowed to explain details of the format and its rationale, but not anything related to Intel's implementation architecture. The draft was co-written with Jerome Coonen and Harold Stone, and was initially known as the \"Kahan-Coonen-Stone proposal\" or \"K-C-S format\".",
"title": "History"
},
{
"paragraph_id": 32,
"text": "As an 8-bit exponent was not wide enough for some operations desired for double-precision numbers, e.g. to store the product of two 32-bit numbers, both Kahan's proposal and a counter-proposal by DEC therefore used 11 bits, like the time-tested 60-bit floating-point format of the CDC 6600 from 1965. Kahan's proposal also provided for infinities, which are useful when dealing with division-by-zero conditions; not-a-number values, which are useful when dealing with invalid operations; denormal numbers, which help mitigate problems caused by underflow; and a better balanced exponent bias, which can help avoid overflow and underflow when taking the reciprocal of a number.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "Even before it was approved, the draft standard had been implemented by a number of manufacturers. The Intel 8087, which was announced in 1980, was the first chip to implement the draft standard.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "In 1980, the Intel 8087 chip was already released, but DEC remained opposed, to denormal numbers in particular, because of performance concerns and since it would give DEC a competitive advantage to standardise on DEC's format.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "The arguments over gradual underflow lasted until 1981 when an expert hired by DEC to assess it sided against the dissenters. DEC had the study done in order to demonstrate that gradual underflow was a bad idea, but the study concluded the opposite, and DEC gave in. In 1985, the standard was ratified, but it had already become the de facto standard a year earlier, implemented by many manufacturers.",
"title": "History"
}
]
| IEEE 754-1985 is a historic industry standard for representing floating-point numbers in computers, officially adopted in 1985 and superseded in 2008 by IEEE 754-2008, and then again in 2019 by minor revision IEEE 754-2019. During its 23 years, it was the most widely used format for floating-point computation. It was implemented in software, in the form of floating-point libraries, and in hardware, in the instructions of many CPUs and FPUs. The first integrated circuit to implement the draft of what was to become IEEE 754-1985 was the Intel 8087. IEEE 754-1985 represents numbers in binary, providing definitions for four levels of precision, of which the two most commonly used are: The standard also defines representations for positive and negative infinity, a "negative zero", five exceptions to handle invalid results like division by zero, special values called NaNs for representing those exceptions, denormal numbers to represent numbers smaller than shown above, and four rounding modes. | 2001-10-25T15:08:24Z | 2023-12-07T14:37:40Z | [
"Template:Short description",
"Template:See also",
"Template:E",
"Template:Math",
"Template:Clear",
"Template:Cite journal",
"Template:Notelist",
"Template:Cite web",
"Template:IEEE standards",
"Template:Efn",
"Template:Anchor",
"Template:Reflist",
"Template:Citation",
"Template:Unreliable source?",
"Template:Cite book"
]
| https://en.wikipedia.org/wiki/IEEE_754-1985 |
15,190 | Intel 80186 | The Intel 80186, also known as the iAPX 186, or just 186, is a microprocessor and microcontroller introduced in 1982. It was based on the Intel 8086 and, like it, had a 16-bit external data bus multiplexed with a 20-bit address bus. The 80188 variant, with an 8-bit external data bus, was also available.
The 80186 series was generally intended for embedded systems, as microcontrollers with external memory. Therefore, to reduce the number of integrated circuits required, it included features such as clock generator, interrupt controller, timers, wait state generator, DMA channels, and external chip select lines.
The initial clock rate of the 80186 was 6 MHz, but due to more hardware available for the microcode to use, especially for address calculation, many individual instructions completed in fewer clock cycles than on an 8086 at the same clock frequency. For instance, the common register+immediate addressing mode was significantly faster than on the 8086, especially when a memory location was both (one of) the operand(s) and the destination. Multiply and divide also showed great improvement, being several times as fast as on the original 8086 and multi-bit shifts were done almost four times as quickly as in the 8086.
A few new instructions were introduced with the 80186 (referred to as the 8086-2 instruction set in some datasheets): enter/leave (replacing several instructions when handling stack frames), pusha/popa (push/pop all general registers), bound (check array index against bounds), and ins/outs (input/output of string). A useful immediate mode was added for the push, imul, and multi-bit shift instructions. These instructions were also included in the contemporary 80286 and in successor chips.
The (redesigned) CMOS version, 80C186, introduced DRAM refresh, a power-save mode, and a direct interface to the 80C187 floating point numeric coprocessor. Intel second-sourced this microprocessor to Fujitsu Limited around 1985. The Intel 80186 was offered in both 68-pin PLCC and PGA packages, sampling in the third quarter of 1985. The 12.5 MHz Intel 80186-12 version, built on the 1.5-micron HMOS-III process, was priced at US$36 in quantities of 100. The 12.5 MHz Intel 80C186 version, built on CHMOS III-E technology, drew approximately 90 mA under normal load and only 32 mA in power-save mode; it was available in a 68-pin PLCC, CPGA, or CLCC package. The military version, the Intel M80C186 embedded controller, was available in 10 and 12 MHz versions and met the MIL-STD-883 Rev. C and MIL-STD-1553 bus application standards; the 12 MHz CHMOS version consumed approximately 100 mA and came in 68-pin CPGA and CQFP packages. The 10 MHz M80C186 PGA version was available for US$378 in 100-unit quantities. The 80C186EB, a fully static design for application-specific standard products built on the 1-micron CHMOS IV technology, was offered in 3- and 5-volt versions in 84-lead PLCC and 80-lead EIAJ QFP packages, priced at US$16.95 in 1,000-unit quantities.
Because the integrated hardware of the 80186, designed with embedded systems in mind, was incompatible with the hardware used in the original IBM PC, the 80286 was chosen to succeed the 8086, in the IBM PC/AT and other PC-compatible systems.
Several notable personal computers used the 80186:
In addition to the above examples of stand-alone implementations of the 80186 for personal computers, there were at least two examples of "add-in" accelerator card implementations: the BBC Master 512, Acorn's plug-in for the BBC Master range of computers containing an 80186-10 with 512 KB of RAM, and the Orchid Technology PC Turbo 186, released in 1985. It was intended for use with the original Intel 8088-based IBM PC (Model 5150).
The Intel 80186 is intended to be embedded in electronic devices that are not primarily computers. For example:
In May 2006, Intel announced that production of the 186 would cease at the end of September 2007. Pin- and instruction-compatible replacements might still be manufactured by various third party sources, and FPGA versions are publicly available. | [
{
"paragraph_id": 0,
"text": "The Intel 80186, also known as the iAPX 186, or just 186, is a microprocessor and microcontroller introduced in 1982. It was based on the Intel 8086 and, like it, had a 16-bit external data bus multiplexed with a 20-bit address bus. The 80188 variant, with an 8-bit external data bus was also available.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The 80186 series was generally intended for embedded systems, as microcontrollers with external memory. Therefore, to reduce the number of integrated circuits required, it included features such as clock generator, interrupt controller, timers, wait state generator, DMA channels, and external chip select lines.",
"title": "Description"
},
{
"paragraph_id": 2,
"text": "The initial clock rate of the 80186 was 6 MHz, but due to more hardware available for the microcode to use, especially for address calculation, many individual instructions completed in fewer clock cycles than on an 8086 at the same clock frequency. For instance, the common register+immediate addressing mode was significantly faster than on the 8086, especially when a memory location was both (one of) the operand(s) and the destination. Multiply and divide also showed great improvement, being several times as fast as on the original 8086 and multi-bit shifts were done almost four times as quickly as in the 8086.",
"title": "Description"
},
{
"paragraph_id": 3,
"text": "A few new instructions were introduced with the 80186 (referred to as the 8086-2 instruction set in some datasheets): enter/leave (replacing several instructions when handling stack frames), pusha/popa (push/pop all general registers), bound (check array index against bounds), and ins/outs (input/output of string). A useful immediate mode was added for the push, imul, and multi-bit shift instructions. These instructions were also included in the contemporary 80286 and in successor chips.",
"title": "Description"
},
{
"paragraph_id": 4,
"text": "",
"title": "Description"
},
{
"paragraph_id": 5,
"text": "The (redesigned) CMOS version, 80C186, introduced DRAM refresh, a power-save mode, and a direct interface to the 80C187 floating point numeric coprocessor. Intel second sourced this microprocessor to Fujitsu Limited around 1985. Both packages for Intel 80186 version were available in 68-pin PLCC and PGA in sampling at third quarter of 1985. The available 12.5 MHz Intel 80186-12 version using the 1.5-micron HMOS-III process for USD $36 in quantities of 100. The available 12.5 MHz Intel 80C186 version using the CHMOS III-E technology using approximately 90 mA under normal load and only 32 mA under power-save mode. It was available in 68-pin PLCC, CPGA, or CLCC package. The military version of Intel M80C186 embedded controller was available in 10 and 12 MHz version. They met MIL-STD-883 Rev. C and MIL-STD-1553 bus application standards. The 12 MHz CHMOS version consumes approximately 100 mA. The available packages were 68-pin CPGA and CQFP. The 10 MHz M80C186 PGA version was available for USD $378 in 100-unit quantities. The available 80C186EB in fully static design for the application-specific standard product using the 1-micron CHMOS IV technology. They were available in 3- and 5-Volts version with 84-lead PLCC and 80-lead EIAJ QFP version. It was also available for USD $16.95 in 1,000 unit quantities.",
"title": "Description"
},
{
"paragraph_id": 6,
"text": "Because the integrated hardware of the 80186, designed with embedded systems in mind, was incompatible with the hardware used in the original IBM PC, the 80286 was chosen to succeed the 8086, in the IBM PC/AT and other PC-compatible systems.",
"title": "Uses"
},
{
"paragraph_id": 7,
"text": "Several notable personal computers used the 80186:",
"title": "Uses"
},
{
"paragraph_id": 8,
"text": "In addition to the above examples of stand-alone implementations of the 80186 for personal computers, there were at least two examples of \"add-in\" accelerator card implementations: the BBC Master 512, Acorn's plug-in for the BBC Master range of computers containing an 80186-10 with 512 KB of RAM, and the Orchid Technology PC Turbo 186, released in 1985. It was intended for use with the original Intel 8088-based IBM PC (Model 5150).",
"title": "Uses"
},
{
"paragraph_id": 9,
"text": "The Intel 80186 is intended to be embedded in electronic devices that are not primarily computers. For example:",
"title": "Uses"
},
{
"paragraph_id": 10,
"text": "In May 2006, Intel announced that production of the 186 would cease at the end of September 2007. Pin- and instruction-compatible replacements might still be manufactured by various third party sources, and FPGA versions are publicly available.",
"title": "Uses"
}
]
| The Intel 80186, also known as the iAPX 186, or just 186, is a microprocessor and microcontroller introduced in 1982. It was based on the Intel 8086 and, like it, had a 16-bit external data bus multiplexed with a 20-bit address bus. The 80188 variant, with an 8-bit external data bus was also available. | 2001-10-25T15:20:33Z | 2023-12-19T14:58:53Z | [
"Template:Infobox CPU",
"Template:Fact",
"Template:Notelist",
"Template:Cite AV media",
"Template:Intel processors",
"Template:Short description",
"Template:Merge from",
"Template:Cite book",
"Template:Intel controllers",
"Template:Authority control",
"Template:Citation needed",
"Template:Interlanguage link multi",
"Template:Reflist",
"Template:Cite magazine",
"Template:Cite web",
"Template:Microcontrollers",
"Template:Efn",
"Template:Anchor"
]
| https://en.wikipedia.org/wiki/Intel_80186 |
15,191 | Inquisition | The Inquisition was a group of institutions within the Catholic Church whose aim was to combat heresy, conducting trials of suspected heretics. Studies of the records have found that the overwhelming majority of sentences consisted of penances, but convictions of unrepentant heresy were handed over to the secular courts, which generally resulted in execution or life imprisonment. The Inquisition had its start in the 12th-century Kingdom of France, with the aim of combating religious deviation (e.g. apostasy or heresy), particularly among the Cathars and the Waldensians. The inquisitorial courts from this time until the mid-15th century are together known as the Medieval Inquisition. Other groups investigated during the Medieval Inquisition, which primarily took place in France and Italy, include the Spiritual Franciscans, the Hussites, and the Beguines. Beginning in the 1250s, inquisitors were generally chosen from members of the Dominican Order, replacing the earlier practice of using local clergy as judges.
During the Late Middle Ages and the early Renaissance, the scope of the Inquisition grew significantly in response to the Protestant Reformation and the Catholic Counter-Reformation. During this period, the Inquisition conducted by the Holy See was known as the Roman Inquisition. The Inquisition also expanded to other European countries, resulting in the Spanish Inquisition and the Portuguese Inquisition. The Spanish and Portuguese Inquisitions were instead focused particularly on the New Christians or Conversos, as the former Jews who converted to Christianity to avoid antisemitic regulations and persecution were called, the anusim (people who were forced to abandon Judaism against their will by violence and threats of expulsion) and on Muslim converts to Catholicism. The scale of the persecution of converted Muslims and converted Jews in Spain and Portugal was the result of suspicions that they had secretly reverted to their previous religions, although both religious minority groups were also more numerous on the Iberian Peninsula than in other parts of Europe, as well as the fear of possible rebellions and armed uprisings, as had occurred in previous times.
During this time, Spain and Portugal operated inquisitorial courts not only in Europe, but also throughout their empires in Africa, Asia, and the Americas. This resulted in the Goa Inquisition, the Peruvian Inquisition, and the Mexican Inquisition, among others.
With the exception of the Papal States, the institution of the Inquisition was abolished in the early 19th century, after the Napoleonic Wars in Europe and the Spanish American wars of independence in the Americas. The institution survived as part of the Roman Curia, but in 1908 it was renamed the Supreme Sacred Congregation of the Holy Office. In 1965, it became the Congregation for the Doctrine of the Faith. In 2022, this office was renamed the Dicastery for the Doctrine of the Faith.
The term "Inquisition" comes from the Medieval Latin word inquisitio, which described any court process based on Roman law, which had gradually come back into use during the Late Middle Ages. Today, the English term "Inquisition" can apply to any one of several institutions that worked against heretics or other offenders against the canon law of the Catholic Church. Although the term "Inquisition" is usually applied to ecclesiastical courts of the Catholic Church, it refers to a judicial process, not an organization. Inquisitors '...were called such because they applied a judicial technique known as inquisitio, which could be translated as "inquiry" or "inquest".' In this process, which was already widely used by secular rulers (Henry II used it extensively in England in the 12th century), an official inquirer called for information on a specific subject from anyone who felt he or she had something to offer."
The Inquisition, as a church-court, had no jurisdiction over Muslims and Jews as such. Generally, the Inquisition was concerned only with the heretical behaviour of Catholic adherents or converts.
The overwhelming majority of sentences seem to have consisted of penances like wearing a cross sewn on one's clothes or going on pilgrimage. When a suspect was convicted of unrepentant heresy, canon law required the inquisitorial tribunal to hand the person over to secular authorities for final sentencing. A secular magistrate, the "secular arm", would then determine the penalty based on local law. Those local laws included proscriptions against certain religious crimes, and the punishments included death by burning, although the penalty was more usually banishment or imprisonment for life, which was generally commuted after a few years. Thus the inquisitors generally knew the fate that awaited anyone so remanded.
The 1578 edition of the Directorium Inquisitorum (a standard Inquisitorial manual) spelled out the purpose of inquisitorial penalties: ... quoniam punitio non refertur primo & per se in correctionem & bonum eius qui punitur, sed in bonum publicum ut alij terreantur, & a malis committendis avocentur (translation: "... for punishment does not take place primarily and per se for the correction and good of the person punished, but for the public good in order that others may become terrified and weaned away from the evils they would commit").
Before the 12th century, the Catholic Church suppressed what they believed to be heresy, usually through a system of ecclesiastical proscription or imprisonment, but without using torture, and seldom resorting to executions. Such punishments were opposed by a number of clergymen and theologians, although some countries punished heresy with the death penalty. Pope Siricius, Ambrose of Milan, and Martin of Tours protested against the execution of Priscillian, largely as an undue interference in ecclesiastical discipline by a civil tribunal. Though widely viewed as a heretic, Priscillian was executed as a sorcerer. Ambrose refused to give any recognition to Ithacius of Ossonuba, "not wishing to have anything to do with bishops who had sent heretics to their death".
In the 12th century, to counter the spread of Catharism, prosecution of heretics became more frequent. The Church charged councils composed of bishops and archbishops with establishing inquisitions (the Episcopal Inquisition). The first Inquisition was temporarily established in Languedoc (south of France) in 1184. The murder of Pope Innocent's papal legate Pierre de Castelnau in 1208 sparked the Albigensian Crusade (1209–1229). The Inquisition was permanently established in 1229 (Council of Toulouse), run largely by the Dominicans in Rome and later at Carcassonne in Languedoc.
Historians use the term "Medieval Inquisition" to describe the various inquisitions that started around 1184, including the Episcopal Inquisition (1184–1230s) and later the Papal Inquisition (1230s). These inquisitions responded to large popular movements throughout Europe considered apostate or heretical to Christianity, in particular the Cathars in southern France and the Waldensians in both southern France and northern Italy. Other Inquisitions followed after these first inquisition movements. The legal basis for some inquisitorial activity came from Pope Innocent IV's papal bull Ad extirpanda of 1252, which explicitly authorized (and defined the appropriate circumstances for) the use of torture by the Inquisition for eliciting confessions from heretics. However, Nicholas Eymerich, the inquisitor who wrote the "Directorium Inquisitorum", stated: 'Quaestiones sunt fallaces et ineficaces' ("interrogations via torture are misleading and futile"). By 1256 inquisitors were given absolution if they used instruments of torture.
In the 13th century, Pope Gregory IX (reigned 1227–1241) assigned the duty of carrying out inquisitions to the Dominican Order and Franciscan Order. By the end of the Middle Ages, England and Castile were the only large western nations without a papal inquisition. Most inquisitors were friars who taught theology and/or law in the universities. They used inquisitorial procedures, a common legal practice adapted from the earlier Ancient Roman court procedures. They judged heresy along with bishops and groups of "assessors" (clergy serving in a role that was roughly analogous to a jury or legal advisers), using the local authorities to establish a tribunal and to prosecute heretics. After 1200, a Grand Inquisitor headed each Inquisition. Grand Inquisitions persisted until the mid 19th century.
Only fragmentary data is available for the period before the Roman Inquisition of 1542. In 1276, some 170 Cathars were captured in Sirmione, who were then imprisoned in Verona, and there, after a two-year trial, on February 13, 1278, more than a hundred of them were burned. In Orvieto, at the end of 1268/1269, 85 heretics were sentenced, none of whom were executed, but in 18 cases the sentence concerned people who had already died. In Tuscany, the inquisitor Ruggiero burned at least 11 people in about a year (1244/1245). Excluding the executions of the heretics at Sirmione in 1278, 36 Inquisition executions are documented in the March of Treviso between 1260 and 1308. Ten people were executed in Bologna between 1291 and 1310. In Piedmont, 22 heretics (mainly Waldensians) were burned in the years 1312-1395 out of 213 convicted. 22 Waldensians were burned in Cuneo around 1440 and another five in the Marquisate of Saluzzo in 1510. There are also fragmentary records of a good number of executions of people suspected of witchcraft in northern Italy in the 15th and early 16th centuries. Wolfgang Behringer estimates that there could have been as many as two thousand executions. This large number of executions for witchcraft was probably because some inquisitors took the view that the crime of witchcraft was exceptional, which meant that the usual rules for heresy trials did not apply to its perpetrators. Many alleged witches were executed even though they were first tried and pleaded guilty, which under normal rules would have meant only canonical sanctions, not death sentences. The episcopal inquisition was also active in suppressing alleged witches: in 1518, judges delegated by the Bishop of Brescia, Paolo Zane, sent some 70 witches from Val Camonica to the stake.
France has the best preserved archives of the medieval inquisition (13th-14th centuries), although they are still very incomplete. The activity of the inquisition in this country was very diverse, both in terms of time and territory. In the first period (1233 to c. 1330), the courts of Languedoc (Toulouse, Carcassonne) are the most active. After 1330 the center of the persecution of heretics shifted to the Alpine regions, while in Languedoc they ceased almost entirely. In northern France, the activity of the Inquisition was irregular throughout this period and, except for the first few years, it was not very intense.
France's first Dominican inquisitor, Robert le Bougre, working in the years 1233-1244, earned a particularly grim reputation. In 1236, Robert burned about 50 people in the area of Champagne and Flanders, and on May 13, 1239, in Montwimer, he burned 183 Cathars. Following Robert's removal from office, Inquisition activity in northern France remained very low. One of the largest trials in the area took place in 1459-1460 at Arras; 34 people were then accused of witchcraft and satanism, 12 of them were burned at the stake.
The main center of the medieval inquisition was undoubtedly the Languedoc. The first inquisitors were appointed there in 1233, but due to strong resistance from local communities in the early years, most sentences concerned dead heretics, whose bodies were exhumed and burned. Actual executions occurred sporadically and, until the fall of the fortress of Montsegur (1244), probably accounted for no more than 1% of all sentences. In addition to the cremation of the remains of the dead, a large percentage were also sentences in absentia and penances imposed on heretics who voluntarily confessed their faults (for example, in the years 1241-1242 the inquisitor Pierre Ceila reconciled 724 heretics with the Church). Inquisitor Ferrier of Catalonia, investigating Montauban between 1242 and 1244, questioned about 800 people, of whom he sentenced 6 to death and 20 to prison. Between 1243 and 1245, Bernard de Caux handed down 25 sentences of imprisonment and confiscation of property in Agen and Cahors. After the fall of Montsegur and the seizure of power in Toulouse by Count Alfonso de Poitiers, the percentage of death sentences increased to around 7% and remained at this level until the end of the Languedoc Inquisition around 1330. Between 1245 and 1246, the inquisitor Bernard de Caux carried out a large-scale investigation in the area of Lauragais and Lavaur. It covered 39 villages, and probably all the adult inhabitants (5,471 people) were questioned, of whom 207 were found guilty of heresy. Of these 207, no one was sentenced to death; 23 were sentenced to prison and 184 to penance. Between 1246 and 1248, the inquisitors Bernard de Caux and Jean de Saint-Pierre handed down 192 sentences in Toulouse, of which 43 were sentences in absentia and 149 were prison sentences. In Pamiers in 1246/1247 there were 7 prison sentences, and in Limoux in the county of Foix 156 people were sentenced to carry crosses. Between 1249 and 1257, in Toulouse, the Inquisition handed down 306 sentences, without counting the penitential sentences imposed during "times of grace": 21 people were sentenced to death and 239 to prison; in addition, 30 people were sentenced in absentia and 11 posthumously. In another five cases the type of sanction is unknown, but since they all involved repeat offenders, only prison or burning could have been imposed. Between 1237 and 1279, at least 507 convictions were passed in Toulouse (most in absentia or posthumously) resulting in the confiscation of property; in Albi between 1240 and 1252 there were 60 sentences of this type.
The activities of Bernard Gui, inquisitor of Toulouse from 1307 to 1323, are better documented, as a complete record of his trials has been preserved. During the entire period of his inquisitorial activity, he handed down 633 sentences against 602 people (31 repeat offenders), including:
In addition, Bernard Gui issued 274 further rulings mitigating sentences already imposed on convicted heretics; in 139 cases he commuted prison to the wearing of crosses, and in 135 cases the wearing of crosses to pilgrimage. The full record also includes 22 orders to demolish houses used by heretics as meeting places and one condemnation and burning of Jewish writings (including commentaries on the Torah).
The episcopal inquisition was also active in Languedoc. In the years 1232–1234, the Bishop of Toulouse, Raymond, sentenced several dozen Cathars to death. In turn, Bishop Jacques Fournier of Pamiers in the years 1318-1325 conducted an investigation against 89 people, of whom 64 were found guilty and 5 were sentenced to death.
After 1330, the center of activity of the French Inquisition moved east, to the Alpine regions, where there were numerous Waldensian communities. The repression against them was not continuous and was very ineffective. Data on sentences issued by inquisitors are fragmentary. In 1348, 12 Waldensians were burned in Embrun, and in 1353/1354 as many as 168 received penances. In general, however, few Waldensians fell into the hands of the Inquisition, for they took refuge in hard-to-reach mountainous regions, where they formed close-knit communities. Inquisitors operating in this region, in order to be able to conduct trials, often had to resort to the armed assistance of local secular authorities (e.g. military expeditions in 1338–1339 and 1366). In the years 1375–1393 (with some breaks), the Dauphiné was the scene of the activities of the inquisitor Francois Borel, who gained an extremely gloomy reputation among the locals. It is known that on July 1, 1380, he pronounced death sentences in absentia against 169 people, including 108 from the Valpute valley, 32 from Argentiere and 29 from Freyssiniere. It is not known how many of these sentences were actually carried out; only six people, captured in 1382, are confirmed to have been executed.
In the 15th and 16th centuries, major trials took place only sporadically, e.g. against the Waldensians in the Dauphiné in 1430–1432 (no numerical data) and 1532–1533 (7 executed out of about 150 tried) or the aforementioned trial in Arras in 1459–1460. In the 16th century, the jurisdiction of the Inquisition in the kingdom of France was effectively limited to clergymen, while local parliaments took over the jurisdiction of the laity. Between 1500 and 1560, 62 people were burned for heresy in the Languedoc, all of whom were convicted by the Parliament of Toulouse.
Between 1657 and 1659, twenty-two alleged witches were burned on the orders of the inquisitor Pierre Symard in the province of Franche-Comte, then part of the Empire.
The inquisitorial tribunal in papal Avignon, established in 1541, passed 855 death sentences, almost all of them (818) in the years 1566–1574, but the vast majority of them were pronounced in absentia.
In the years 1231-1233, the Rhineland and Thuringia were the field of activity of the notorious inquisitor Konrad of Marburg. Unfortunately, the documentation of his trials has not been preserved, making it impossible to determine the number of his victims. The chronicles mention only that he burned "many" heretics. The only concrete information concerns the burning of four people in Erfurt in May 1232.
After the murder of Konrad of Marburg, burning at the stake in Germany was virtually unknown for the next 80 years. It was not until the early fourteenth century that stronger measures were taken against heretics, largely at the initiative of bishops. In the years 1311-1315, numerous trials were held against the Waldensians in Austria, resulting in the burning of at least 39 people, according to incomplete records. In 1336, in Angermünde, in the diocese of Brandenburg, another 14 heretics were burned.
The number of those convicted by the papal inquisitors was smaller. Walter Kerlinger burned 10 begards in Erfurt and Nordhausen in 1368-1369. In turn, Eylard Schöneveld burned a total of four people in various Baltic cities in 1402-1403.
In the last decade of the 14th century, episcopal inquisitors carried out large-scale operations against heretics in eastern Germany, Pomerania, Austria, and Hungary. In Pomerania, of 443 sentenced in the years 1392-1394 by the inquisitor Peter Zwicker, the provincial of the Celestinians, none went to the stake, because they all submitted to the Church. Bloodier were the trials of the Waldensians in Austria in 1397, where more than a hundred Waldensians were burned at the stake. However, it seems that in these trials the death sentences represented only a small percentage of all the sentences, because according to the account of one of the inquisitors involved in these repressions, the number of heretics reconciled with the Church from Thuringia to Hungary amounted to about 2,000.
In 1414, the inquisitor Heinrich von Schöneveld arrested 84 flagellants in Sangerhausen, burned 3 of their leaders, and imposed penitential sentences on the rest. However, since this sect was associated with the peasant revolts in Thuringia from 1412, after the inquisitor's departure the local authorities organized a mass hunt for flagellants and, regardless of their previous verdicts, sent at least 168 people (and possibly as many as 300) to the stake. Inquisitor Friedrich Müller (d. 1460) sentenced to death 12 of the 13 heretics he had tried in 1446 at Nordhausen. In 1453 the same inquisitor burned 2 heretics in Göttingen.
Inquisitor Heinrich Kramer, author of the Malleus Maleficarum, in his own words, sentenced 48 people to the stake in five years (1481-1486). Jacob Hoogstraten, inquisitor of Cologne from 1508 to 1527, sentenced four people to be burned at the stake.
Very little is known about the activities of the inquisition in Hungary and the countries under its influence (Bosnia, Croatia), as there are few sources about this activity. Numerous conversions and executions of Bosnian Cathars are known to have taken place around 1239/40, and in 1268 the Dominican inquisitor Andrew reconciled many heretics with the Church in the town of Skradin, but precise figures are unknown. The border areas with Bohemia and Austria were under major inquisitorial action against the Waldensians in the early 15th century. In addition, in the years 1436-1440 in the Kingdom of Hungary, the Franciscan Jacobo de la Marcha acted as an inquisitor... his mission was mixed, preaching and inquisitorial. The correspondence preserved between James, his collaborators, the Hungarian bishops and Pope Eugene IV shows that he reconciled up to 25,000 people with the Church. This correspondence also shows that he punished recalcitrant heretics with death, and in 1437 numerous executions were carried out in the diocese of Sirmium, although the number of those executed is also unknown.
In Bohemia and Poland, the inquisition was established permanently in 1318, although anti-heretical repression had already been carried out by the episcopal inquisition in 1315, when more than 50 Waldensians were burned in various Silesian cities. The fragmentary surviving protocols of the investigations carried out by the Prague inquisitor Gallus de Neuhaus in the years 1335 to around 1353 mention 14 heretics burned out of almost 300 interrogated, but it is estimated that the actual number executed could have been more than 200, and that the proceedings involved, to varying degrees, some 4,400 people.
In the lands belonging to the Kingdom of Poland, little is known of the activities of the Inquisition until the appearance of the Hussite heresy in the 15th century. In the fight against this heresy, Polish inquisition courts issued at least 8 death sentences in the course of some 200 trials.
Researchers have identified 558 court cases ending in conviction in Poland between the 15th and 18th centuries.
With the sharpening of debate and of conflict between the Protestant Reformation and the Catholic Counter-Reformation, Protestant societies came to see/use the Inquisition as a terrifying "Other", while staunch Catholics regarded the Holy Office as a necessary bulwark against the spread of reprehensible heresies.
While belief in witchcraft, and persecutions directed at or excused by it, were widespread in pre-Christian Europe, and reflected in Germanic law, the influence of the Church in the early medieval era resulted in the revocation of these laws in many places, bringing an end to traditional pagan witch hunts. Throughout the medieval era, mainstream Christian teaching had denied the existence of witches and witchcraft, condemning it as pagan superstition. However, Christian influence on popular beliefs in witches and maleficium (harm committed by magic) failed to entirely eradicate folk belief in witches.
The fierce denunciation and persecution of supposed sorceresses that characterized the cruel witchhunts of a later age were not generally found in the first thirteen hundred years of the Christian era. The medieval Church distinguished between "white" and "black" magic. Local folk practice often mixed chants, incantations, and prayers to the appropriate patron saint to ward off storms, to protect cattle, or ensure a good harvest. Bonfires on Midsummer's Eve were intended to deflect natural catastrophes or the influence of fairies, ghosts, and witches. Plants, often harvested under particular conditions, were deemed effective in healing.
Black magic was that which was used for a malevolent purpose. This was generally dealt with through confession, repentance, and charitable work assigned as penance. Early Irish canons treated sorcery as a crime to be visited with excommunication until adequate penance had been performed. In 1258, Pope Alexander IV ruled that inquisitors should limit their involvement to those cases in which there was some clear presumption of heretical belief.
The prosecution of witchcraft generally became more prominent in the late medieval and Renaissance era, perhaps driven partly by the upheavals of the era – the Black Death, the Hundred Years War, and a gradual cooling of the climate that modern scientists call the Little Ice Age (between about the 15th and 19th centuries). Witches were sometimes blamed. Since the years of most intense witch-hunting largely coincide with the age of the Reformation, some historians point to the influence of the Reformation on the European witch-hunt.
Dominican priest Heinrich Kramer was assistant to the Archbishop of Salzburg. In 1484 Kramer requested that Pope Innocent VIII clarify his authority to prosecute witchcraft in Germany, where he had been refused assistance by the local ecclesiastical authorities. They maintained that Kramer could not legally function in their areas.
The papal bull Summis desiderantes affectibus sought to remedy this jurisdictional dispute by specifically identifying the dioceses of Mainz, Köln, Trier, Salzburg, and Bremen. Some scholars view the bull as "clearly political". The bull failed to ensure that Kramer obtained the support he had hoped for. In fact he was subsequently expelled from the city of Innsbruck by the local bishop, George Golzer, who ordered Kramer to stop making false accusations. Golzer described Kramer as senile in letters written shortly after the incident. This rebuke led Kramer to write a justification of his views on witchcraft in his 1486 book Malleus Maleficarum ("Hammer against witches"). In the book, Kramer stated his view that witchcraft was to blame for bad weather. The book is also noted for its animus against women. Despite Kramer's claim that the book gained acceptance from the clergy at the University of Cologne, it was in fact condemned by the clergy at Cologne for advocating views that violated Catholic doctrine and standard inquisitorial procedure. In 1538 the Spanish Inquisition cautioned its members not to believe everything the Malleus said.
Portugal and Spain in the late Middle Ages consisted largely of multicultural territories of Muslim and Jewish influence, reconquered from Islamic control, and the new Christian authorities could not assume that all their subjects would suddenly become and remain orthodox Catholics. So the Inquisition in Iberia, in the lands of the Reconquista counties and kingdoms like León, Castile, and Aragon, had a special socio-political basis as well as more fundamental religious motives.
In some parts of Spain towards the end of the 14th century, there was a wave of violent anti-Judaism, encouraged by the preaching of Ferrand Martínez, Archdeacon of Écija. In the pogroms of June 1391 in Seville, hundreds of Jews were killed, and the synagogue was completely destroyed. The number of people killed was also high in other cities, such as Córdoba, Valencia, and Barcelona.
One of the consequences of these pogroms was the mass conversion of thousands of surviving Jews. Forced baptism was contrary to the law of the Catholic Church, and theoretically anybody who had been forcibly baptized could legally return to Judaism. However, this was very narrowly interpreted. Legal definitions of the time theoretically acknowledged that a forced baptism was not a valid sacrament, but confined this to cases where it was literally administered by physical force. A person who had consented to baptism under threat of death or serious injury was still regarded as a voluntary convert, and accordingly forbidden to revert to Judaism. After the public violence, many of the converted "felt it safer to remain in their new religion". Thus, after 1391, a new social group appeared and were referred to as conversos or New Christians.
King Ferdinand II of Aragon and Queen Isabella I of Castile established the Spanish Inquisition in 1478. In contrast to the previous inquisitions, it operated completely under royal Christian authority, though staffed by clergy and orders, and independently of the Holy See. It operated in Spain and in most Spanish colonies and territories, which included the Canary Islands, the Kingdom of Sicily, and all Spanish possessions in North, Central, and South America. It primarily focused upon forced converts from Islam (Moriscos, Conversos, and "secret Moors") and from Judaism (Conversos, Crypto-Jews, and Marranos)—both groups still resided in Spain after the end of the Islamic control of Spain—who came under suspicion of either continuing to adhere to their old religion or of having fallen back into it.
All Jews who had not converted were expelled from Spain in 1492, and all Muslims ordered to convert in different stages starting in 1501. Those who converted or simply remained after the relevant edict became nominally and legally Catholics, and thus subject to the Inquisition.
In 1569, King Philip II of Spain set up three tribunals in the Americas (each formally titled Tribunal del Santo Oficio de la Inquisición): one in Mexico, one in Cartagena de Indias (in modern-day Colombia), and one in Peru. The Mexican office administered Mexico (central and southeastern Mexico), Nueva Galicia (northern and western Mexico), the Audiencias of Guatemala (Guatemala, Chiapas, El Salvador, Honduras, Nicaragua, Costa Rica), and the Spanish East Indies. The Peruvian Inquisition, based in Lima, administered all the Spanish territories in South America and Panama.
The Portuguese Inquisition formally started in Portugal in 1536 at the request of King João III. Manuel I had asked Pope Leo X for the installation of the Inquisition in 1515, but only after his death in 1521 did Pope Paul III acquiesce. At its head stood a Grande Inquisidor, or General Inquisitor, named by the Pope but selected by the Crown, and always from within the royal family. The Portuguese Inquisition principally focused upon the Sephardi Jews, whom the state forced to convert to Christianity. Spain had expelled its Sephardi population in 1492; many of these Spanish Jews left Spain for Portugal but eventually were subject to inquisition there as well.
The Portuguese Inquisition held its first auto-da-fé in 1540. The Portuguese inquisitors mostly focused upon the Jewish New Christians (i.e. conversos or marranos). The Portuguese Inquisition expanded its scope of operations from Portugal to its colonial possessions, including Brazil, Cape Verde, and Goa. In the colonies, it continued as a religious court, investigating and trying cases of breaches of the tenets of orthodox Catholicism until 1821. King João III (reigned 1521–57) extended the activity of the courts to cover censorship, divination, witchcraft, and bigamy. Originally oriented for a religious action, the Inquisition exerted an influence over almost every aspect of Portuguese society: political, cultural, and social.
According to Henry Charles Lea, between 1540 and 1794, tribunals in Lisbon, Porto, Coimbra, and Évora resulted in the burning of 1,175 persons, the burning of another 633 in effigy, and the penancing of 29,590. But documentation of 15 out of 689 autos-da-fé has disappeared, so these numbers may slightly understate the activity.
The Goa Inquisition began in 1560 at the order of John III of Portugal. It had originally been requested in a letter in the 1540s by Jesuit priest Francis Xavier, because of the New Christians who had arrived in Goa and then reverted to Judaism. The Goa Inquisition also focused upon Catholic converts from Hinduism or Islam who were thought to have returned to their original ways. In addition, this inquisition prosecuted non-converts who broke prohibitions against the public observance of Hindu or Muslim rites or interfered with Portuguese attempts to convert non-Christians to Catholicism. Aleixo Dias Falcão and Francisco Marques set it up in the palace of the Sabaio Adil Khan.
The inquisition was active in colonial Brazil. The religious mystic and formerly enslaved prostitute Rosa Egipcíaca was arrested, interrogated, and imprisoned, both in the colony and in Lisbon. Egipcíaca was the first black woman in Brazil to write a book; this work detailed her visions and was entitled Sagrada Teologia do Amor Divino das Almas Peregrinas.
With the Protestant Reformation, Catholic authorities became much more ready to suspect heresy in any new ideas, including those of Renaissance humanism, previously strongly supported by many at the top of the Church hierarchy. The extirpation of heretics became a much broader and more complex enterprise, complicated by the politics of territorial Protestant powers, especially in northern Europe. The Catholic Church could no longer exercise direct influence in the politics and justice-systems of lands that officially adopted Protestantism. Thus war (the French Wars of Religion, the Thirty Years' War), massacre (the St. Bartholomew's Day massacre) and the missional and propaganda work (by the Sacra congregatio de propaganda fide) of the Counter-Reformation came to play larger roles in these circumstances, and the Roman law type of a "judicial" approach to heresy represented by the Inquisition became less important overall. In 1542 Pope Paul III established the Congregation of the Holy Office of the Inquisition as a permanent congregation staffed with cardinals and other officials. It had the tasks of maintaining and defending the integrity of the faith and of examining and proscribing errors and false doctrines; it thus became the supervisory body of local Inquisitions. A famous case tried by the Roman Inquisition was that of Galileo Galilei in 1633.
The penances and sentences for those who confessed or were found guilty were pronounced together in a public ceremony at the end of all the processes. This was the sermo generalis or auto-da-fé. Penances (not matters for the civil authorities) might consist of a pilgrimage, a public scourging, a fine, or the wearing of a cross. The wearing of two tongues of red or other brightly colored cloth, sewn onto an outer garment in an "X" pattern, marked those who were under investigation. The penalties in serious cases were confiscation of property by the Inquisition or imprisonment. This created the possibility of false charges, brought in order to enable confiscation, against those above a certain income, particularly rich marranos. Following the French invasion of 1798, the new authorities sent 3,000 chests containing over 100,000 Inquisition documents to France from Rome.
By decree of Napoleon's government in 1797, the Inquisition in Venice was abolished in 1806.
In Portugal, in the wake of the Liberal Revolution of 1820, the "General Extraordinary and Constituent Courts of the Portuguese Nation" abolished the Portuguese inquisition in 1821.
The wars of independence of the former Spanish colonies in the Americas concluded with the abolition of the Inquisition in every quarter of Hispanic America between 1813 and 1825.
The last execution of the Inquisition was in Spain in 1826. This was the execution by garroting of the Catalan school teacher Gaietà Ripoll for purportedly teaching Deism in his school. In Spain the practices of the Inquisition were finally outlawed in 1834.
In Italy, the restoration of the Pope as the ruler of the Papal States in 1814 brought back the Inquisition to the Papal States. It remained active there until the late-19th century, notably in the well-publicised Mortara affair (1858–1870). In 1908 the name of the Congregation became "The Sacred Congregation of the Holy Office", which in 1965 further changed to "Congregation for the Doctrine of the Faith", as retained to the present day.
Defendants were commonly interrogated under torture and finally punished if found guilty, with their property being requisitioned in the process to defray legal and prison costs. They could also repent of the offence of which they were accused and be reconciled with the Church. Torture sessions were attended by the inquisitor, the doctor, the secretary, and the executioner, and the tortures were applied (except in the case of women) to the completely naked prisoner. In 1252, the bull Ad extirpanda allowed torture, but always with a doctor involved to avoid endangering life, and limited its use to three methods (none of which was bloody):
According to the Catholic Church, the method of torture (which was socially accepted in the context of the time) was adopted only in exceptional cases. The inquisitorial procedure was meticulously regulated in interrogation practices.
Not all civilly accepted methods of torture were endorsed by the Catholic Church; for a defendant to be sent to torture, he had to be prosecuted for a crime considered serious, and the court also had to have well-founded suspicions of his guilt. None of these methods originated with the Holy Office; rather, they were used by civil authorities.
Despite the use of torture, the inquisitorial procedure represented a breakthrough in the history of legislation. On the one hand, it definitively ruled out the ordeal, a Germanic tradition long condemned by the hierarchy (though never subjected to disciplinary measures), as a means of obtaining evidence, replacing it with the principle of testimonial evidence, which is still in force in current law. On the other hand, it restored the principle of the state as prosecutor or accusing party. Until that time, it was the victim who had to prove the guilt of his aggressor, even in the most serious criminal proceedings; this was often very difficult when the victim was weak and the criminal powerful. Under the Inquisition, by contrast, the victim was no more than a simple witness, as happens in countries where an inquisitorial system is applied; it was the ecclesiastical authority that now had the burden of proof. The summary of the Directorium Inquisitorum, by Nicolás Aymerich, made by Marchena, notes a comment by the Aragonese inquisitor: Quaestiones sunt fallaces et inefficaces ("Interrogations are misleading and useless").
Despite what is popularly believed, the cases in which torture was used during inquisitorial trials were rare, since it was considered ineffective in obtaining evidence. In the vast majority of cases, the display of torture instruments served mainly to intimidate the accused, their actual use being the exception rather than the norm.
In the words of historian Helen Mary Carrel: "the common view of the medieval justice system as cruel and based on torture and execution is often unfair and inaccurate." As the historian Nigel Townson wrote: "The sinister torture chambers equipped with cogwheels, bone crushing contraptions, shackles, and other terrifying mechanisms only existed in the imagination of their detractors."
Some instruments of torture attributed to the Inquisition actually originated in the Protestant churches, or were used by civil authorities, such as those presented in the Constitutio Criminalis Theresiana (of the Habsburg Monarchy) or the Ordonnance de Blois (of the Parlement of Paris). These were modern, not medieval, inventions and were not related to the Inquisition.
Many were designed by late 18th and early 19th century pranksters, entertainers, and con artists who wanted to profit from people's morbid interest in the Dark Age myth by charging them to witness such instruments in Victorian-era circuses.
However, several torture instruments are accurately described in Foxe's Book of Martyrs, including but not limited to the dry pan.
Some of the instruments that the Inquisition never used, but that are erroneously registered in various inquisition museums:
Beginning in the 19th century, historians have gradually compiled statistics drawn from the surviving court records, from which estimates have been calculated by adjusting the recorded number of convictions by the average rate of document loss for each time period. Gustav Henningsen and Jaime Contreras studied the records of the Spanish Inquisition, which list 44,674 cases, of which 826 resulted in executions in person and 778 in effigy (i.e. a straw dummy was burned in place of the person). William Monter estimated there were 1,000 executions between 1530 and 1630 and 250 between 1630 and 1730. Jean-Pierre Dedieu studied the records of Toledo's tribunal, which put 12,000 people on trial. For the period prior to 1530, Henry Kamen estimated there were about 2,000 executions in all of Spain's tribunals. Italian Renaissance history professor and Inquisition expert Carlo Ginzburg had his doubts about using statistics to reach a judgment about the period. "In many cases, we don't have the evidence, the evidence has been lost," said Ginzburg. | [
{
"paragraph_id": 0,
"text": "The Inquisition was a group of institutions within the Catholic Church whose aim was to combat heresy, conducting trials of suspected heretics. Studies of the records have found that the overwhelming majority of sentences consisted of penances, but convictions of unrepentant heresy were handed over to the secular courts, which generally resulted in execution or life imprisonment. The Inquisition had its start in the 12th-century Kingdom of France, with the aim of combating religious deviation (e.g. apostasy or heresy), particularly among the Cathars and the Waldensians. The inquisitorial courts from this time until the mid-15th century are together known as the Medieval Inquisition. Other groups investigated during the Medieval Inquisition, which primarily took place in France and Italy, include the Spiritual Franciscans, the Hussites, and the Beguines. Beginning in the 1250s, inquisitors were generally chosen from members of the Dominican Order, replacing the earlier practice of using local clergy as judges.",
"title": ""
},
{
"paragraph_id": 1,
"text": "During the Late Middle Ages and the early Renaissance, the scope of the Inquisition grew significantly in response to the Protestant Reformation and the Catholic Counter-Reformation. During this period, the Inquisition conducted by the Holy See was known as the Roman Inquisition. The Inquisition also expanded to other European countries, resulting in the Spanish Inquisition and the Portuguese Inquisition. The Spanish and Portuguese Inquisitions were instead focused particularly on the New Christians or Conversos, as the former Jews who converted to Christianity to avoid antisemitic regulations and persecution were called, the anusim (people who were forced to abandon Judaism against their will by violence and threats of expulsion) and on Muslim converts to Catholicism. The scale of the persecution of converted Muslims and converted Jews in Spain and Portugal was the result of suspicions that they had secretly reverted to their previous religions, although both religious minority groups were also more numerous on the Iberian Peninsula than in other parts of Europe, as well as the fear of possible rebellions and armed uprisings, as had occurred in previous times.",
"title": ""
},
{
"paragraph_id": 2,
"text": "During this time, Spain and Portugal operated inquisitorial courts not only in Europe, but also throughout their empires in Africa, Asia, and the Americas. This resulted in the Goa Inquisition, the Peruvian Inquisition, and the Mexican Inquisition, among others.",
"title": ""
},
{
"paragraph_id": 3,
"text": "With the exception of the Papal States, the institution of the Inquisition was abolished in the early 19th century, after the Napoleonic Wars in Europe and the Spanish American wars of independence in the Americas. The institution survived as part of the Roman Curia, but in 1908 it was renamed the Supreme Sacred Congregation of the Holy Office. In 1965, it became the Congregation for the Doctrine of the Faith. In 2022, this office was renamed the Dicastery for the Doctrine of the Faith.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The term \"Inquisition\" comes from the Medieval Latin word inquisitio, which described any court process based on Roman law, which had gradually come back into use during the Late Middle Ages. Today, the English term \"Inquisition\" can apply to any one of several institutions that worked against heretics or other offenders against the canon law of the Catholic Church. Although the term \"Inquisition\" is usually applied to ecclesiastical courts of the Catholic Church, it refers to a judicial process, not an organization. Inquisitors '...were called such because they applied a judicial technique known as inquisitio, which could be translated as \"inquiry\" or \"inquest\".' In this process, which was already widely used by secular rulers (Henry II used it extensively in England in the 12th century), an official inquirer called for information on a specific subject from anyone who felt he or she had something to offer.\"",
"title": "Definition and Goals"
},
{
"paragraph_id": 5,
"text": "The Inquisition, as a church-court, had no jurisdiction over Muslims and Jews as such. Generally, the Inquisition was concerned only with the heretical behaviour of Catholic adherents or converts.",
"title": "Definition and Goals"
},
{
"paragraph_id": 6,
"text": "The overwhelming majority of sentences seem to have consisted of penances like wearing a cross sewn on one's clothes or going on pilgrimage. When a suspect was convicted of unrepentant heresy, canon law required the inquisitorial tribunal to hand the person over to secular authorities for final sentencing. A secular magistrate, the \"secular arm\", would then determine the penalty based on local law. Those local laws included proscriptions against certain religious crimes, and the punishments included death by burning, although the penalty was more usually banishment or imprisonment for life, which was generally commuted after a few years. Thus the inquisitors generally knew the fate which expected anyone so remanded.",
"title": "Definition and Goals"
},
{
"paragraph_id": 7,
"text": "The 1578 edition of the Directorium Inquisitorum (a standard Inquisitorial manual) spelled out the purpose of inquisitorial penalties: ... quoniam punitio non refertur primo & per se in correctionem & bonum eius qui punitur, sed in bonum publicum ut alij terreantur, & a malis committendis avocentur (translation: \"... for punishment does not take place primarily and per se for the correction and good of the person punished, but for the public good in order that others may become terrified and weaned away from the evils they would commit\").",
"title": "Definition and Goals"
},
{
"paragraph_id": 8,
"text": "Before the 12th century, the Catholic Church suppressed what they believed to be heresy, usually through a system of ecclesiastical proscription or imprisonment, but without using torture, and seldom resorting to executions. Such punishments were opposed by a number of clergymen and theologians, although some countries punished heresy with the death penalty. Pope Siricius, Ambrose of Milan, and Martin of Tours protested against the execution of Priscillian, largely as an undue interference in ecclesiastical discipline by a civil tribunal. Though widely viewed as a heretic, Priscillian was executed as a sorcerer. Ambrose refused to give any recognition to Ithacius of Ossonuba, \"not wishing to have anything to do with bishops who had sent heretics to their death\".",
"title": "Origin"
},
{
"paragraph_id": 9,
"text": "In the 12th century, to counter the spread of Catharism, prosecution of heretics became more frequent. The Church charged councils composed of bishops and archbishops with establishing inquisitions (the Episcopal Inquisition). The first Inquisition was temporarily established in Languedoc (south of France) in 1184. The murder of Pope Innocent's papal legate Pierre de Castelnau in 1208 sparked the Albigensian Crusade (1209–1229). The Inquisition was permanently established in 1229 (Council of Toulouse), run largely by the Dominicans in Rome and later at Carcassonne in Languedoc.",
"title": "Origin"
},
{
"paragraph_id": 10,
"text": "Historians use the term \"Medieval Inquisition\" to describe the various inquisitions that started around 1184, including the Episcopal Inquisition (1184–1230s) and later the Papal Inquisition (1230s). These inquisitions responded to large popular movements throughout Europe considered apostate or heretical to Christianity, in particular the Cathars in southern France and the Waldensians in both southern France and northern Italy. Other Inquisitions followed after these first inquisition movements. The legal basis for some inquisitorial activity came from Pope Innocent IV's papal bull Ad extirpanda of 1252, which explicitly authorized (and defined the appropriate circumstances for) the use of torture by the Inquisition for eliciting confessions from heretics. However, Nicholas Eymerich, the inquisitor who wrote the \"Directorium Inquisitorum\", stated: 'Quaestiones sunt fallaces et ineficaces' (\"interrogations via torture are misleading and futile\"). By 1256 inquisitors were given absolution if they used instruments of torture.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 11,
"text": "In the 13th century, Pope Gregory IX (reigned 1227–1241) assigned the duty of carrying out inquisitions to the Dominican Order and Franciscan Order. By the end of the Middle Ages, England and Castile were the only large western nations without a papal inquisition. Most inquisitors were friars who taught theology and/or law in the universities. They used inquisitorial procedures, a common legal practice adapted from the earlier Ancient Roman court procedures. They judged heresy along with bishops and groups of \"assessors\" (clergy serving in a role that was roughly analogous to a jury or legal advisers), using the local authorities to establish a tribunal and to prosecute heretics. After 1200, a Grand Inquisitor headed each Inquisition. Grand Inquisitions persisted until the mid 19th century.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 12,
"text": "Only fragmentary data is available for the period before the Roman Inquisition of 1542. In 1276, some 170 Cathars were captured in Sirmione, who were then imprisoned in Verona, and there, after a two-year trial, on February 13 from 1278, more than a hundred of them were burned. In Orvieto, at the end of 1268/1269, 85 heretics were sentenced, none of whom were executed, but in 18 cases the sentence concerned people who had already died. In Tuscany, the inquisitor Ruggiero burned at least 11 people in about a year (1244/1245). Excluding the executions of the heretics at Sirmione in 1278, 36 Inquisition executions are documented in the March of Treviso between 1260 and 1308. Ten people were executed in Bologna between 1291 and 1310. In Piedmont, 22 heretics (mainly Waldensians) were burned in the years 1312-1395 out of 213 convicted. 22 Waldensians were burned in Cuneo around 1440 and another five in the Marquisate of Saluzzo in 1510. There are also fragmentary records of a good number of executions of people suspected of witchcraft in northern Italy in the 15th and early 16th centuries. Wolfgang Behringer estimates that there could have been as many as two thousand executions. This large number of witches executed was probably because some inquisitors took the view that the crime of witchcraft was exceptional, which meant that the usual rules for heresy trials did not apply to its perpetrators. Many alleged witches were executed even though they were first tried and pleaded guilty, which under normal rules would have meant only canonical sanctions, not death sentences. The episcopal inquisition was also active in suppressing alleged witches: in 1518, judges delegated by the Bishop of Brescia, Paolo Zane, sent some 70 witches from Val Camonica to the stake.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 13,
"text": "France has the best preserved archives of the medieval inquisition (13th-14th centuries), although they are still very incomplete. The activity of the inquisition in this country was very diverse, both in terms of time and territory. In the first period (1233 to c. 1330), the courts of Languedoc (Toulouse, Carcassonne) are the most active. After 1330 the center of the persecution of heretics shifted to the Alpine regions, while in Languedoc they ceased almost entirely. In northern France, the activity of the Inquisition was irregular throughout this period and, except for the first few years, it was not very intense.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 14,
"text": "France's first Dominican inquisitor, Robert le Bougre, working in the years 1233-1244, earned a particularly grim reputation. In 1236, Robert burned about 50 people in the area of Champagne and Flanders, and on May 13, 1239, in Montwimer, he burned 183 Cathars. Following Robert's removal from office, Inquisition activity in northern France remained very low. One of the largest trials in the area took place in 1459-1460 at Arras; 34 people were then accused of witchcraft and satanism, 12 of them were burned at the stake.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 15,
"text": "The main center of the medieval inquisition was undoubtedly the Languedoc. The first inquisitors were appointed there in 1233, but due to strong resistance from local communities in the early years, most sentences concerned dead heretics, whose bodies were exhumed and burned. Actual executions occurred sporadically and, until the fall of the fortress of Montsegur (1244), probably accounted for no more than 1% of all sentences. In addition to the cremation of the remains of the dead, a large percentage were also sentences in absentia and penances imposed on heretics who voluntarily confessed their faults (for example, in the years 1241-1242 the inquisitor Pierre Ceila reconciled 724 heretics with the Church). Inquisitor Ferrier of Catalonia, investigating Montauban between 1242 and 1244, questioned about 800 people, of whom he sentenced 6 to death and 20 to prison. Between 1243 and 1245, Bernard de Caux handed down 25 sentences of imprisonment and confiscation of property in Agen and Cahors. After the fall of Montsegur and the seizure of power in Toulouse by Count Alfonso de Poitiers, the percentage of death sentences increased to around 7% and remained at this level until the end of the Languedoc Inquisition around from 1330. Between 1245 and 1246, the inquisitor Bernard de Caux carried out a large-scale investigation in the area of Lauragais and Lavaur. He covered 39 villages, and probably all the adult inhabitants (5,471 people) were questioned, of whom 207 were found guilty of heresy. Of these 207, no one was sentenced to death, 23 were sentenced to prison and 184 to penance. Between 1246 and 1248, the inquisitors Bernard de Caux and Jean de Saint-Pierre handed down 192 sentences in Toulouse, of which 43 were sentences in absentia and 149 were prison sentences. In Pamiers in 1246/1247 there were 7 prison sentences [201] and in Limoux in the county of Foix 156 people were sentenced to carry crosses. Between 1249 and 1257, in Toulouse, the Inquisition handed down 306 sentences, without counting the penitential sentences imposed during \"times of grace\". 21 people were sentenced to death, 239 to prison, in addition, 30 people were sentenced in absentia and 11 posthumously; In another five cases the type of sanction is unknown, but since they all involve repeat offenders, only prison or burning is at stake. Between 1237 and 1279, at least 507 convictions were passed in Toulouse (most in absentia or posthumously) resulting in the confiscation of property; in Albi between 1240 and 1252 there were 60 sentences of this type.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 16,
"text": "The activities of Bernard Gui, inquisitor of Toulouse from 1307 to 1323, are better documented, as a complete record of his trials has been preserved. During the entire period of his inquisitorial activity, he handed down 633 sentences against 602 people (31 repeat offenders), including:",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 17,
"text": "In addition, Bernard Gui issued 274 more sentences involving the mitigation of sentences already served to convicted heretics; in 139 cases he exchanged prison for carrying crosses, and in 135 cases, carrying crosses for pilgrimage. To the full statistics, there are 22 orders to demolish houses used by heretics as meeting places and one condemnation and burning of Jewish writings (including commentaries on the Torah).",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 18,
"text": "The episcopal inquisition was also active in Languedoc. In the years 1232–1234, the Bishop of Toulouse, Raymond, sentenced several dozen Cathars to death. In turn, Bishop Jacques Fournier of Pamiers in the years 1318-1325 conducted an investigation against 89 people, of whom 64 were found guilty and 5 were sentenced to death.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 19,
"text": "After 1330, the center of activity of the French Inquisition moved east, to the Alpine regions, where there were numerous Waldensian communities. The repression against them was not continuous and was very ineffective. Data on sentences issued by inquisitors are fragmentary. In 1348, 12 Waldensians were burned in Embrun, and in 1353/1354 as many as 168 received penances. In general, however, few Waldensians fell into the hands of the Inquisition, for they took refuge in hard-to-reach mountainous regions, where they formed close-knit communities. Inquisitors operating in this region, in order to be able to conduct trials, often had to resort to the armed assistance of local secular authorities (e.g. military expeditions in 1338–1339 and 1366). In the years 1375–1393 (with some breaks), the Dauphiné was the scene of the activities of the inquisitor Francois Borel, who gained an extremely gloomy reputation among the locals. It is known that on July 1, 1380, he pronounced death sentences in absentia against 169 people, including 108 from the Valpute valley, 32 from Argentiere and 29 from Freyssiniere. It is not known how many of them were actually carried out, only six people captured in 1382 are confirmed to be executed.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 20,
"text": "In the 15th and 16th centuries, major trials took place only sporadically, e.g. against the Waldensians in Delphinate in 1430–1432 (no numerical data) and 1532–1533 (7 executed out of about 150 tried) or the aforementioned trial in Arras 1459–1460 . In the 16th century, the jurisdiction of the Inquisition in the kingdom of France was effectively limited to clergymen, while local parliaments took over the jurisdiction of the laity. Between 1500 and 1560, 62 people were burned for heresy in the Languedoc, all of whom were convicted by the Parliament of Toulouse.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 21,
"text": "Between 1657 and 1659, twenty-two alleged witches were burned on the orders of the inquisitor Pierre Symard in the province of Franche-Comte, then part of the Empire.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 22,
"text": "The inquisitorial tribunal in papal Avignon, established in 1541, passed 855 death sentences, almost all of them (818) in the years 1566–1574, but the vast majority of them were pronounced in absentia.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 23,
"text": "The Rhineland and Thuringia in the years 1231-1233 were the field of activity of the notorious inquisitor Konrad of Marburg. Unfortunately, the documentation of his trials has not been preserved, making it impossible to determine the number of his victims. The chronicles only mention \"many\" heretics that he burned. The only concrete information is about the burning of four people in Erfurt in May 1232.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 24,
"text": "After the murder of Konrad of Marburg, burning at the stake in Germany was virtually unknown for the next 80 years. It was not until the early fourteenth century that stronger measures were taken against heretics, largely at the initiative of bishops. In the years 1311-1315, numerous trials were held against the Waldensians in Austria, resulting in the burning of at least 39 people, according to incomplete records. In 1336, in Angermünde, in the diocese of Brandenburg, another 14 heretics were burned.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 25,
"text": "The number of those convicted by the papal inquisitors was smaller. Walter Kerlinger burned 10 begards in Erfurt and Nordhausen in 1368-1369. In turn, Eylard Schöneveld burned a total of four people in various Baltic cities in 1402-1403.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 26,
"text": "In the last decade of the 14th century, episcopal inquisitors carried out large-scale operations against heretics in eastern Germany, Pomerania, Austria, and Hungary. In Pomerania, of 443 sentenced in the years 1392-1394 by the inquisitor Peter Zwicker, the provincial of the Celestinians, none went to the stake, because they all submitted to the Church. Bloodier were the trials of the Waldensians in Austria in 1397, where more than a hundred Waldensians were burned at the stake. However, it seems that in these trials the death sentences represented only a small percentage of all the sentences, because according to the account of one of the inquisitors involved in these repressions, the number of heretics reconciled with the Church from Thuringia to Hungary amounted to about 2,000.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 27,
"text": "In 1414, the inquisitor Heinrich von Schöneveld arrested 84 flagellants in Sangerhausen, of whom he burned 3 leaders, and imposed penitential sentences on the rest. However, since this sect was associated with the peasant revolts in Thuringia from 1412, after the departure of the inquisitor, the local authorities organized a mass hunt for flagellants and, regardless of their previous verdicts, sent at least 168 to the stake (possibly up to 300) people. Inquisitor Friedrich Müller (d. 1460) sentenced to death 12 of the 13 heretics he had tried in 1446 at Nordhausen. In 1453 the same inquisitor burned 2 heretics in Göttingen.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 28,
"text": "Inquisitor Heinrich Kramer, author of the Malleus Maleficarum, in his own words, sentenced 48 people to the stake in five years (1481-1486). Jacob Hoogstraten, inquisitor of Cologne from 1508 to 1527, sentenced four people to be burned at the stake.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 29,
"text": "Very little is known about the activities of the inquisition in Hungary and the countries under its influence (Bosnia, Croatia), as there are few sources about this activity. Numerous conversions and executions of Bosnian Cathars are known to have taken place around 1239/40, and in 1268 the Dominican inquisitor Andrew reconciled many heretics with the Church in the town of Skradin, but precise figures are unknown. The border areas with Bohemia and Austria were under major inquisitorial action against the Waldensians in the early 15th century. In addition, in the years 1436-1440 in the Kingdom of Hungary, the Franciscan Jacobo de la Marcha acted as an inquisitor... his mission was mixed, preaching and inquisitorial. The correspondence preserved between James, his collaborators, the Hungarian bishops and Pope Eugene IV shows that he reconciled up to 25,000 people with the Church. This correspondence also shows that he punished recalcitrant heretics with death, and in 1437 numerous executions were carried out in the diocese of Sirmium, although the number of those executed is also unknown.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 30,
"text": "In Bohemia and Poland, the inquisition was established permanently in 1318, although anti-heretical repressions were carried out as early as 1315 in the episcopal inquisition, when more than 50 Waldensians were burned in various Silesian cities. The fragmentary surviving protocols of the investigations carried out by the Prague inquisitor Gallus de Neuhaus in the years 1335 to around 1353 mention 14 heretics burned out of almost 300 interrogated, but it is estimated that the actual number executed could have been even more than 200. , and the entire process was covered to varying degrees by some 4,400 people.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 31,
"text": "In the lands belonging to the Kingdom of Poland little is known of the activities of the Inquisition until the appearance of the Hussite heresy in the 15th century. Polish courts of the inquisition in the fight against this heresy issued at least 8 death sentences for some 200 trials carried out.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 32,
"text": "There are 558 court cases finished with conviction researched in Poland from XV to XVIII centuries.",
"title": "Medieval Inquisition"
},
{
"paragraph_id": 33,
"text": "With the sharpening of debate and of conflict between the Protestant Reformation and the Catholic Counter-Reformation, Protestant societies came to see/use the Inquisition as a terrifying \"Other\", while staunch Catholics regarded the Holy Office as a necessary bulwark against the spread of reprehensible heresies.",
"title": "Early modern European history"
},
{
"paragraph_id": 34,
"text": "While belief in witchcraft, and persecutions directed at or excused by it, were widespread in pre-Christian Europe, and reflected in Germanic law, the influence of the Church in the early medieval era resulted in the revocation of these laws in many places, bringing an end to traditional pagan witch hunts. Throughout the medieval era, mainstream Christian teaching had denied the existence of witches and witchcraft, condemning it as pagan superstition. However, Christian influence on popular beliefs in witches and maleficium (harm committed by magic) failed to entirely eradicate folk belief in witches.",
"title": "Early modern European history"
},
{
"paragraph_id": 35,
"text": "The fierce denunciation and persecution of supposed sorceresses that characterized the cruel witchhunts of a later age were not generally found in the first thirteen hundred years of the Christian era. The medieval Church distinguished between \"white\" and \"black\" magic. Local folk practice often mixed chants, incantations, and prayers to the appropriate patron saint to ward off storms, to protect cattle, or ensure a good harvest. Bonfires on Midsummer's Eve were intended to deflect natural catastrophes or the influence of fairies, ghosts, and witches. Plants, often harvested under particular conditions, were deemed effective in healing.",
"title": "Early modern European history"
},
{
"paragraph_id": 36,
"text": "Black magic was that which was used for a malevolent purpose. This was generally dealt with through confession, repentance, and charitable work assigned as penance. Early Irish canons treated sorcery as a crime to be visited with excommunication until adequate penance had been performed. In 1258, Pope Alexander IV ruled that inquisitors should limit their involvement to those cases in which there was some clear presumption of heretical belief.",
"title": "Early modern European history"
},
{
"paragraph_id": 37,
"text": "The prosecution of witchcraft generally became more prominent in the late medieval and Renaissance era, perhaps driven partly by the upheavals of the era – the Black Death, the Hundred Years War, and a gradual cooling of the climate that modern scientists call the Little Ice Age (between about the 15th and 19th centuries). Witches were sometimes blamed. Since the years of most intense witch-hunting largely coincide with the age of the Reformation, some historians point to the influence of the Reformation on the European witch-hunt.",
"title": "Early modern European history"
},
{
"paragraph_id": 38,
"text": "Dominican priest Heinrich Kramer was assistant to the Archbishop of Salzburg. In 1484 Kramer requested that Pope Innocent VIII clarify his authority to prosecute witchcraft in Germany, where he had been refused assistance by the local ecclesiastical authorities. They maintained that Kramer could not legally function in their areas.",
"title": "Early modern European history"
},
{
"paragraph_id": 39,
"text": "The papal bull Summis desiderantes affectibus sought to remedy this jurisdictional dispute by specifically identifying the dioceses of Mainz, Köln, Trier, Salzburg, and Bremen. Some scholars view the bull as \"clearly political\". The bull failed to ensure that Kramer obtained the support he had hoped for. In fact he was subsequently expelled from the city of Innsbruck by the local bishop, George Golzer, who ordered Kramer to stop making false accusations. Golzer described Kramer as senile in letters written shortly after the incident. This rebuke led Kramer to write a justification of his views on witchcraft in his 1486 book Malleus Maleficarum (\"Hammer against witches\"). In the book, Kramer stated his view that witchcraft was to blame for bad weather. The book is also noted for its animus against women. Despite Kramer's claim that the book gained acceptance from the clergy at the University of Cologne, it was in fact condemned by the clergy at Cologne for advocating views that violated Catholic doctrine and standard inquisitorial procedure. In 1538 the Spanish Inquisition cautioned its members not to believe everything the Malleus said.",
"title": "Early modern European history"
},
{
"paragraph_id": 40,
"text": "Portugal and Spain in the late Middle Ages consisted largely of multicultural territories of Muslim and Jewish influence, reconquered from Islamic control, and the new Christian authorities could not assume that all their subjects would suddenly become and remain orthodox Catholics. So the Inquisition in Iberia, in the lands of the Reconquista counties and kingdoms like León, Castile, and Aragon, had a special socio-political basis as well as more fundamental religious motives.",
"title": "Early modern European history"
},
{
"paragraph_id": 41,
"text": "In some parts of Spain towards the end of the 14th century, there was a wave of violent anti-Judaism, encouraged by the preaching of Ferrand Martínez, Archdeacon of Écija. In the pogroms of June 1391 in Seville, hundreds of Jews were killed, and the synagogue was completely destroyed. The number of people killed was also high in other cities, such as Córdoba, Valencia, and Barcelona.",
"title": "Early modern European history"
},
{
"paragraph_id": 42,
"text": "One of the consequences of these pogroms was the mass conversion of thousands of surviving Jews. Forced baptism was contrary to the law of the Catholic Church, and theoretically anybody who had been forcibly baptized could legally return to Judaism. However, this was very narrowly interpreted. Legal definitions of the time theoretically acknowledged that a forced baptism was not a valid sacrament, but confined this to cases where it was literally administered by physical force. A person who had consented to baptism under threat of death or serious injury was still regarded as a voluntary convert, and accordingly forbidden to revert to Judaism. After the public violence, many of the converted \"felt it safer to remain in their new religion\". Thus, after 1391, a new social group appeared and were referred to as conversos or New Christians.",
"title": "Early modern European history"
},
{
"paragraph_id": 43,
"text": "King Ferdinand II of Aragon and Queen Isabella I of Castile established the Spanish Inquisition in 1478. In contrast to the previous inquisitions, it operated completely under royal Christian authority, though staffed by clergy and orders, and independently of the Holy See. It operated in Spain and in most Spanish colonies and territories, which included the Canary Islands, the Kingdom of Sicily, and all Spanish possessions in North, Central, and South America. It primarily focused upon forced converts from Islam (Moriscos, Conversos, and \"secret Moors\") and from Judaism (Conversos, Crypto-Jews, and Marranos)—both groups still resided in Spain after the end of the Islamic control of Spain—who came under suspicion of either continuing to adhere to their old religion or of having fallen back into it.",
"title": "Early modern European history"
},
{
"paragraph_id": 44,
"text": "All Jews who had not converted were expelled from Spain in 1492, and all Muslims ordered to convert in different stages starting in 1501. Those who converted or simply remained after the relevant edict became nominally and legally Catholics, and thus subject to the Inquisition.",
"title": "Early modern European history"
},
{
"paragraph_id": 45,
"text": "In 1569, King Philip II of Spain set up three tribunals in the Americas (each formally titled Tribunal del Santo Oficio de la Inquisición): one in Mexico, one in Cartagena de Indias (in modern-day Colombia), and onw in Peru. The Mexican office administered Mexico (central and southeastern Mexico), Nueva Galicia (northern and western Mexico), the Audiencias of Guatemala (Guatemala, Chiapas, El Salvador, Honduras, Nicaragua, Costa Rica), and the Spanish East Indies. The Peruvian Inquisition, based in Lima, administered all the Spanish territories in South America and Panama.",
"title": "Early modern European history"
},
{
"paragraph_id": 46,
"text": "The Portuguese Inquisition formally started in Portugal in 1536 at the request of King João III. Manuel I had asked Pope Leo X for the installation of the Inquisition in 1515, but only after his death in 1521 did Pope Paul III acquiesce. At its head stood a Grande Inquisidor, or General Inquisitor, named by the Pope but selected by the Crown, and always from within the royal family. The Portuguese Inquisition principally focused upon the Sephardi Jews, whom the state forced to convert to Christianity. Spain had expelled its Sephardi population in 1492; many of these Spanish Jews left Spain for Portugal but eventually were subject to inquisition there as well.",
"title": "Early modern European history"
},
{
"paragraph_id": 47,
"text": "The Portuguese Inquisition held its first auto-da-fé in 1540. The Portuguese inquisitors mostly focused upon the Jewish New Christians (i.e. conversos or marranos). The Portuguese Inquisition expanded its scope of operations from Portugal to its colonial possessions, including Brazil, Cape Verde, and Goa. In the colonies, it continued as a religious court, investigating and trying cases of breaches of the tenets of orthodox Catholicism until 1821. King João III (reigned 1521–57) extended the activity of the courts to cover censorship, divination, witchcraft, and bigamy. Originally oriented for a religious action, the Inquisition exerted an influence over almost every aspect of Portuguese society: political, cultural, and social.",
"title": "Early modern European history"
},
{
"paragraph_id": 48,
"text": "According to Henry Charles Lea, between 1540 and 1794, tribunals in Lisbon, Porto, Coimbra, and Évora resulted in the burning of 1,175 persons, the burning of another 633 in effigy, and the penancing of 29,590. But documentation of 15 out of 689 autos-da-fé has disappeared, so these numbers may slightly understate the activity.",
"title": "Early modern European history"
},
{
"paragraph_id": 49,
"text": "The Goa Inquisition began in 1560 at the order of John III of Portugal. It had originally been requested in a letter in the 1540s by Jesuit priest Francis Xavier, because of the New Christians who had arrived in Goa and then reverted to Judaism. The Goa Inquisition also focused upon Catholic converts from Hinduism or Islam who were thought to have returned to their original ways. In addition, this inquisition prosecuted non-converts who broke prohibitions against the public observance of Hindu or Muslim rites or interfered with Portuguese attempts to convert non-Christians to Catholicism. Aleixo Dias Falcão and Francisco Marques set it up in the palace of the Sabaio Adil Khan.",
"title": "Early modern European history"
},
{
"paragraph_id": 50,
"text": "The inquisition was active in colonial Brazil. The religious mystic and formerly enslaved prostitute, Rosa Egipcíaca was arrested, interrogated and imprisoned, both in the colony and in Lisbon. Egipcíaca was the first black woman in Brazil to write a book - this work detailed her visions and was entitled Sagrada Teologia do Amor Divino das Almas Peregrinas.",
"title": "Early modern European history"
},
{
"paragraph_id": 51,
"text": "With the Protestant Reformation, Catholic authorities became much more ready to suspect heresy in any new ideas, including those of Renaissance humanism, previously strongly supported by many at the top of the Church hierarchy. The extirpation of heretics became a much broader and more complex enterprise, complicated by the politics of territorial Protestant powers, especially in northern Europe. The Catholic Church could no longer exercise direct influence in the politics and justice-systems of lands that officially adopted Protestantism. Thus war (the French Wars of Religion, the Thirty Years' War), massacre (the St. Bartholomew's Day massacre) and the missional and propaganda work (by the Sacra congregatio de propaganda fide) of the Counter-Reformation came to play larger roles in these circumstances, and the Roman law type of a \"judicial\" approach to heresy represented by the Inquisition became less important overall. In 1542 Pope Paul III established the Congregation of the Holy Office of the Inquisition as a permanent congregation staffed with cardinals and other officials. It had the tasks of maintaining and defending the integrity of the faith and of examining and proscribing errors and false doctrines; it thus became the supervisory body of local Inquisitions. A famous case tried by the Roman Inquisition was that of Galileo Galilei in 1633.",
"title": "Early modern European history"
},
{
"paragraph_id": 52,
"text": "The penances and sentences for those who confessed or were found guilty were pronounced together in a public ceremony at the end of all the processes. This was the sermo generalis or auto-da-fé. Penances (not matters for the civil authorities) might consist of a pilgrimage, a public scourging, a fine, or the wearing of a cross. The wearing of two tongues of red or other brightly colored cloth, sewn onto an outer garment in an \"X\" pattern, marked those who were under investigation. The penalties in serious cases were confiscation of property by the Inquisition or imprisonment. This led to the possibility of false charges to enable confiscation being made against those over a certain income, particularly rich marranos. Following the French invasion of 1798, the new authorities sent 3,000 chests containing over 100,000 Inquisition documents to France from Rome.",
"title": "Early modern European history"
},
{
"paragraph_id": 53,
"text": "By decree of Napoleon's government in 1797, the Inquisition in Venice was abolished in 1806.",
"title": "Ending of the Inquisition in the 19th and 20th centuries"
},
{
"paragraph_id": 54,
"text": "In Portugal, in the wake of the Liberal Revolution of 1820, the \"General Extraordinary and Constituent Courts of the Portuguese Nation\" abolished the Portuguese inquisition in 1821.",
"title": "Ending of the Inquisition in the 19th and 20th centuries"
},
{
"paragraph_id": 55,
"text": "The wars of independence of the former Spanish colonies in the Americas concluded with the abolition of the Inquisition in every quarter of Hispanic America between 1813 and 1825.",
"title": "Ending of the Inquisition in the 19th and 20th centuries"
},
{
"paragraph_id": 56,
"text": "The last execution of the Inquisition was in Spain in 1826. This was the execution by garroting of the Catalan school teacher Gaietà Ripoll for purportedly teaching Deism in his school. In Spain the practices of the Inquisition were finally outlawed in 1834.",
"title": "Ending of the Inquisition in the 19th and 20th centuries"
},
{
"paragraph_id": 57,
"text": "In Italy, the restoration of the Pope as the ruler of the Papal States in 1814 brought back the Inquisition to the Papal States. It remained active there until the late-19th century, notably in the well-publicised Mortara affair (1858–1870). In 1908 the name of the Congregation became \"The Sacred Congregation of the Holy Office\", which in 1965 further changed to \"Congregation for the Doctrine of the Faith\", as retained to the present day.",
"title": "Ending of the Inquisition in the 19th and 20th centuries"
},
{
"paragraph_id": 58,
"text": "Defendants were commonly interrogated under torture and finally punished if found guilty, with their property being requisitioned in the process to defray legal costs and prison costs. They could also repent of their accusation and receive reconciliation with the Church. The execution of the tortures was attended by the inquisitor, the doctor, the secretary and the executioner, applying them (except in the case of women) on the completely naked prisoner. In the year 1252, the bull Ad extirpanda allowed torture, but always with a doctor involved to avoid endangering life, and limited its use to three methods (not one of which was bloody):",
"title": "Methods of torture used"
},
{
"paragraph_id": 59,
"text": "According to the Catholic Church, the method of torture (which was socially accepted in the context of the time) was adopted only in exceptional cases. The inquisitorial procedure was meticulously regulated in interrogation practices.",
"title": "Methods of torture used"
},
{
"paragraph_id": 60,
"text": "Not all civilly accepted methods of torture were endorsed by the Catholic Church, and for a defendant to be sent to torture, he must be prosecuted for a crime considered serious, and the court must also have well-founded suspicions of his guilt. None of these were originated by the Holy Office; rather they were used by civil authorities.",
"title": "Methods of torture used"
},
{
"paragraph_id": 61,
"text": "Despite the use of torture, the inquisitorial procedure represents a breakthrough in the history of legislation. On the one hand, it definitely ruled out the use of the ordeal, a Germanic tradition long condemned by the hierarchy, without taking disciplinary measures against it, as a means of obtaining evidence, replacing it with the principle of testimonial evidence, which is still in force in current laws. On the other hand, the principle of the State as prosecutor or accusing party is restored. Until that time, it was the victim who had to prove the guilt of his aggressor, even in the most serious criminal proceedings, this was often very difficult when the victim was weak and the criminal powerful. But in the Inquisition, the victim is no more than a simple witness, as happens in countries where an inquisitive system is applied. It was the ecclesiastical authority who now had the burden of proof.. The summary of the Directorium Inquisitorum, by Nicolás Aymerich, made by Marchena, notes a comment by the Aragonese inquisitor: Quaestiones sunt fallaces et inefficaces (\"The interrogations are misleading and useless\").",
"title": "Methods of torture used"
},
{
"paragraph_id": 62,
"text": "Despite what is popularly believed, the cases in which torture was used during the inquisitorial processes were rare, since it was considered to be ineffective in obtaining evidence. In addition, in the vast majority of cases, display of torture instruments mainly had the purpose of intimidation of the accused, their use being more the exception than the norm.",
"title": "Methods of torture used"
},
{
"paragraph_id": 63,
"text": "In the words of historian Helen Mary Carrel: \"the common view of the medieval justice system as cruel and based on torture and execution is often unfair and inaccurate.\" As the historian Nigel Townson wrote: \"The sinister torture chambers equipped with cogwheels, bone crushing contraptions, shackles, and other terrifying mechanisms only existed in the imagination of their detractors.\"",
"title": "Methods of torture used"
},
{
"paragraph_id": 64,
"text": "Some instruments of torture awarded to the Inquisition, actually originated in the Protestant churches, or were and/or used by civil authorities, such as those presented in the Constitutio Criminalis Theresiana (of the Habsburg Monarchy) or the Ordonnance de Blois (of the Parlement of Paris). These were modern, not medieval, inventions that were not related to the Inquisition.",
"title": "Methods of torture used"
},
{
"paragraph_id": 65,
"text": "Many were designed by late 18th and early 19th century pranksters, entertainers, and con artists who wanted to profit from people's morbid interest in the Dark Age myth by charging them to witness such instruments in Victorian-era circuses.",
"title": "Methods of torture used"
},
{
"paragraph_id": 66,
"text": "However, several torture instruments are accurately described in Foxe's Book of Martyrs, including but not limited to the dry pan.",
"title": "Methods of torture used"
},
{
"paragraph_id": 67,
"text": "Some of the instruments that the Inquisition never used, but that are erroneously registered in various inquisition museums:",
"title": "Methods of torture used"
},
{
"paragraph_id": 68,
"text": "Beginning in the 19th century, historians have gradually compiled statistics drawn from the surviving court records, from which estimates have been calculated by adjusting the recorded number of convictions by the average rate of document loss for each time period. Gustav Henningsen and Jaime Contreras studied the records of the Spanish Inquisition, which list 44,674 cases of which 826 resulted in executions in person and 778 in effigy (i.e. a straw dummy was burned in place of the person). William Monter estimated there were 1000 executions between 1530–1630 and 250 between 1630 and 1730. Jean-Pierre Dedieu studied the records of Toledo's tribunal, which put 12,000 people on trial. For the period prior to 1530, Henry Kamen estimated there were about 2,000 executions in all of Spain's tribunals. Italian Renaissance history professor and Inquisition expert Carlo Ginzburg had his doubts about using statistics to reach a judgment about the period. \"In many cases, we don't have the evidence, the evidence has been lost,\" said Ginzburg.",
"title": "Statistics"
}
]
| The Inquisition was a group of institutions within the Catholic Church whose aim was to combat heresy, conducting trials of suspected heretics. Studies of the records have found that the overwhelming majority of sentences consisted of penances, but convictions of unrepentant heresy were handed over to the secular courts, which generally resulted in execution or life imprisonment. The Inquisition had its start in the 12th-century Kingdom of France, with the aim of combating religious deviation, particularly among the Cathars and the Waldensians. The inquisitorial courts from this time until the mid-15th century are together known as the Medieval Inquisition. Other groups investigated during the Medieval Inquisition, which primarily took place in France and Italy, include the Spiritual Franciscans, the Hussites, and the Beguines. Beginning in the 1250s, inquisitors were generally chosen from members of the Dominican Order, replacing the earlier practice of using local clergy as judges. During the Late Middle Ages and the early Renaissance, the scope of the Inquisition grew significantly in response to the Protestant Reformation and the Catholic Counter-Reformation. During this period, the Inquisition conducted by the Holy See was known as the Roman Inquisition. The Inquisition also expanded to other European countries, resulting in the Spanish Inquisition and the Portuguese Inquisition. The Spanish and Portuguese Inquisitions were instead focused particularly on the New Christians or Conversos, as the former Jews who converted to Christianity to avoid antisemitic regulations and persecution were called, the anusim and on Muslim converts to Catholicism. The scale of the persecution of converted Muslims and converted Jews in Spain and Portugal was the result of suspicions that they had secretly reverted to their previous religions, although both religious minority groups were also more numerous on the Iberian Peninsula than in other parts of Europe, as well as the fear of possible rebellions and armed uprisings, as had occurred in previous times. During this time, Spain and Portugal operated inquisitorial courts not only in Europe, but also throughout their empires in Africa, Asia, and the Americas. This resulted in the Goa Inquisition, the Peruvian Inquisition, and the Mexican Inquisition, among others. With the exception of the Papal States, the institution of the Inquisition was abolished in the early 19th century, after the Napoleonic Wars in Europe and the Spanish American wars of independence in the Americas. The institution survived as part of the Roman Curia, but in 1908 it was renamed the Supreme Sacred Congregation of the Holy Office. In 1965, it became the Congregation for the Doctrine of the Faith. In 2022, this office was renamed the Dicastery for the Doctrine of the Faith. | 2001-10-26T17:05:57Z | 2023-12-21T02:12:07Z | [
"Template:About",
"Template:Anchor",
"Template:See also",
"Template:Cite book",
"Template:Commons category",
"Template:Citation needed",
"Template:Cite web",
"Template:ISBN",
"Template:Cite journal",
"Template:Antisemitism footer",
"Template:Catholic Church",
"Template:Lang",
"Template:Webarchive",
"Template:Wikiquote",
"Template:Short description",
"Template:Main",
"Template:As of",
"Template:Reflist",
"Template:Cite encyclopedia",
"Template:Citation",
"Template:Wikisource",
"Template:Christian History",
"Template:History of the Catholic Church",
"Template:Religious persecution",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Inquisition |
15,192 | Isaac | Isaac is one of the three patriarchs of the Israelites and an important figure in the Abrahamic religions, including Judaism, Christianity, and Islam. He was the son of Abraham and Sarah, the father of Jacob and Esau, and the grandfather of the twelve tribes of Israel.
Isaac's name means "he will laugh", reflecting the laughter, in disbelief, of Abraham and Sarah, when told by God that they would have a child. He is the only patriarch whose name was not changed, and the only one who did not move out of Canaan. According to the narrative, he died aged 180, the longest-lived of the three patriarchs.
The anglicized name "Isaac" is a transliteration of the Hebrew name יִצְחָק (Yīṣḥāq) which literally means "He laughs/will laugh". Ugaritic texts dating from the 13th century BCE refer to the benevolent smile of the Canaanite deity El. Genesis, however, ascribes the laughter to Isaac's parents, Abraham and Sarah, rather than El. According to the biblical narrative, Abraham fell on his face and laughed when God (Hebrew, Elohim) imparted the news of their son's eventual birth. He laughed because Sarah was past the age of childbearing; both she and Abraham were advanced in age. Later, when Sarah overheard three messengers of the Lord renew the promise, she laughed inwardly for the same reason. Sarah denied laughing when God questioned Abraham about it.
After God changes Abram and Sarai's names to Abraham and Sarah, he tells Abraham that he will bear a second son by Sarah named Isaac, with whom a new covenant would be established. In response, Abraham began to laugh, as both he and Sarah were well beyond natural child-bearing age. Some time later, three men who Abraham identifies as messengers of God visit him and Sarah, and Abraham treats them to food and niceties. They repeat the prophecy that Sarah would bear a child, promising Isaac's birth within a year's time, at which point Sarah laughs in disbelief. God questions why the pair laughed in disbelief at his words, and if it is because they believe such things were not within his power. Now afraid, they futilely deny ever having laughed at God's words.
Time passes as Isaac is born. Isaac was Abraham's second son, as Hagar was the mother of his first son, Ishmael. Isaac was Sarah's first and only child.
On the eighth day from his birth, Isaac was circumcised, as was necessary for all males of Abraham's household, in order to be in compliance with Yahweh's covenant.
After Isaac had been weaned, Sarah saw Ishmael playing with him, and urged her husband to cast out Hagar the bondservant and her son, so that Isaac would be Abraham's sole heir. Abraham was hesitant, but at God's order he listened to his wife's request.
At some point in Isaac's youth, his father Abraham took him to Mount Moriah. At God's command, Abraham was to build a sacrificial altar and sacrifice his son Isaac upon it. After he had bound his son to the altar and drawn his knife to kill him, at the last moment an angel of God prevented Abraham from proceeding. Instead, he was directed to sacrifice a nearby ram that was stuck in thickets.
Before Isaac was 40 (Genesis 25:20), Abraham sent Eliezer, his steward, into Mesopotamia to find a wife for Isaac, from his nephew Bethuel's family. Eliezer chose the Aramean Rebekah for Isaac. After many years of marriage to Isaac, Rebekah had still not given birth to a child and was believed to be barren. Isaac prayed for her and she conceived. Rebekah gave birth to twin boys, Esau and Jacob. Isaac was 60 years old when his two sons were born. Isaac favored Esau, and Rebekah favored Jacob.
The narratives about Isaac do not mention his having concubines.
Isaac moved to Beer-lahai-roi after his father died. When the land experienced famine, he moved to the Philistine land of Gerar where his father once lived. This land was still under the control of King Abimelech as it was in the days of Abraham. Like his father, Isaac also pretended that Rebekah was his sister due to fear that Abimelech would kill him in order to take her. He had gone back to all of the wells that his father dug and saw that they were all stopped up with earth. The Philistines did this after Abraham died. So, Isaac unearthed them and began to dig for more wells all the way to Beersheba, where he made a pact with Abimelech, just like in the day of his father.
Isaac grew old and became blind. He called his son Esau and directed him to procure some venison for him, in order to receive Isaac's blessing. While Esau was hunting, Jacob, after listening to his mother's advice, deceived his blind father by misrepresenting himself as Esau and thereby obtained his father's blessing, such that Jacob became Isaac's primary heir and Esau was left in an inferior position. According to Genesis 25:29–34, Esau had previously sold his birthright to Jacob for "bread and stew of lentils". Thereafter, Isaac sent Jacob into Mesopotamia to take a wife of his mother's brother's house. After 20 years working for his uncle Laban, Jacob returned home. He reconciled with his twin brother Esau, then he and Esau buried their father, Isaac, in Hebron after he died at the age of 180.
According to local tradition, the graves of Isaac and Rebekah, along with the graves of Abraham and Sarah and Jacob and Leah, are in the Cave of the Patriarchs.
In rabbinical tradition, the age of Isaac at the time of binding is taken to be 37, which contrasts with common portrayals of Isaac as a child. The rabbis also thought that the reason for the death of Sarah was the news of the intended sacrifice of Isaac. The sacrifice of Isaac is cited in appeals for the mercy of God in later Jewish traditions. The post-biblical Jewish interpretations often elaborate the role of Isaac beyond the biblical description and primarily focus on Abraham's intended sacrifice of Isaac, called the aqedah ("binding"). According to a version of these interpretations, Isaac died in the sacrifice and was revived. According to many accounts of Aggadah, unlike the Bible, it is Satan who is testing Isaac as an agent of God. Isaac's willingness to follow God's command at the cost of his death has been a model for many Jews who preferred martyrdom to violation of the Jewish law.
According to the Jewish tradition, Isaac instituted the afternoon prayer. This tradition is based on Genesis chapter 24, verse 63 ("Isaac went out to meditate in the field at the eventide").
Isaac was the only patriarch who stayed in Canaan during his whole life and though once he tried to leave, God told him not to do so. Rabbinic tradition gave the explanation that Isaac was almost sacrificed and anything dedicated as a sacrifice may not leave the Land of Israel. Isaac was the oldest of the biblical patriarchs at the time of his death, and the only patriarch whose name was not changed.
Rabbinic literature also linked Isaac's blindness in old age, as stated in the Bible, to the sacrificial binding: Isaac's eyes went blind because the tears of angels present at the time of his sacrifice fell on Isaac's eyes.
The early Christian church continued and developed the New Testament theme of Isaac as a type of Christ and the Church being both "the son of the promise" and the "father of the faithful". Tertullian draws a parallel between Isaac's bearing the wood for the sacrificial fire with Christ's carrying his cross. and there was a general agreement that, while all the sacrifices of the Old Law were anticipations of that on Calvary, the sacrifice of Isaac was so "in a pre-eminent way".
The Eastern Orthodox Church and the Roman Catholic Church consider Isaac as a saint along with other biblical patriarchs. Along with those of other patriarchs and the Old Testament Righteous, his feast day is celebrated in the Eastern Orthodox Church and the Byzantine rite of the Catholic Church on the Second Sunday before Christmas (December 11–17), under the title the Sunday of the Forefathers.
Isaac is commemorated in the Catholic Church on 25 March or on 17 December.
The New Testament states Isaac was "offered up" by his father Abraham, and that Isaac blessed his sons. Paul contrasted Isaac, symbolizing Christian liberty, with the rejected older son Ishmael, symbolizing slavery; Hagar is associated with the Sinai covenant, while Sarah is associated with the covenant of grace, into which her son Isaac enters. The Epistle of James chapter 2, verses 21–24, states that the sacrifice of Isaac shows that justification (in the Johannine sense) requires both faith and works.
In the Epistle to the Hebrews, Abraham's willingness to follow God's command to sacrifice Isaac is used as an example of faith as is Isaac's action in blessing Jacob and Esau with reference to the future promised by God to Abraham. In verse 19, the author views the release of Isaac from sacrifice as analogous to the resurrection of Jesus, the idea of the sacrifice of Isaac being a prefigurement of the sacrifice of Jesus on the cross.
Islam considers Isaac (Arabic: إسحاق, romanized: Isḥāq) a prophet, and describes him as the father of the Israelites and a righteous servant of God.
Isaac, along with Ishmael, is highly important for Muslims for continuing to preach the message of monotheism after his father Abraham. Among Isaac's children was the follow-up Israelite patriarch Jacob, who is also venerated as an Islamic prophet.
Isaac is mentioned seventeen times by name in the Quran, often with his father and his son, Jacob. The Quran states that Abraham received "good tidings of Isaac, a prophet, of the righteous", and that God blessed them both (37:112). In a fuller description, when angels came to Abraham to tell him of the future punishment to be imposed on Sodom and Gomorrah, his wife, Sarah, "laughed, and We gave her good tidings of Isaac, and after Isaac of (a grandson) Jacob" (11:71–74); and it is further explained that this event will take place despite Abraham and Sarah's old age. Several verses speak of Isaac as a "gift" to Abraham (6:84; 14:49–50), and 24:26–27 adds that God made "prophethood and the Book to be among his offspring", which has been interpreted to refer to Abraham's two prophetic sons, his prophetic grandson Jacob, and his prophetic great-grandson Joseph. In the Quran, it later narrates that Abraham also praised God for giving him Ishmael and Isaac in his old age (14:39–41).
Elsewhere in the Quran, Isaac is mentioned in lists: Joseph follows the religion of his forefathers Abraham, Isaac and Jacob (12:38) and speaks of God's favor to them (12:6); Jacob's sons all testify their faith and promise to worship the God that their forefathers, "Abraham, Ishmael and Isaac", worshiped (2:127); and the Quran commands Muslims to believe in the revelations that were given to "Abraham, Ishmael, Isaac, Jacob and the Patriarchs" (2:136; 3:84). In the Quran's narrative of Abraham's near-sacrifice of his son (37:102), the name of the son is not mentioned and debate has continued over the son's identity, though many feel that the identity is the least important element in a story which is given to show the courage that one develops through faith.
The Quran mentions Isaac as a prophet and a righteous man of God. Isaac and Jacob are mentioned as being bestowed upon Abraham as gifts of God, who then worshipped God only and were righteous leaders in the way of God:
And We bestowed on him Isaac and, as an additional gift, (a grandson), Jacob, and We made righteous men of every one (of them). And We made them leaders, guiding (men) by Our Command, and We sent them inspiration to do good deeds, to establish regular prayers, and to practise regular charity; and they constantly served Us (and Us only).
And WE gave him the glad tidings of Isaac, a Prophet, and one of the righteous.
Some scholars have described Isaac as "a legendary figure" or "as a figure representing tribal history, or "as a seminomadic leader". The stories of Isaac, like other patriarchal stories of Genesis, are generally believed to have "their origin in folk memories and oral traditions of the early Hebrew pastoralist experience". The Cambridge Companion to the Bible makes the following comment on the biblical stories of the patriarchs:
Yet for all that these stories maintain a distance between their world and that of their time of literary growth and composition, they reflect the political realities of the later periods. Many of the narratives deal with the relationship between the ancestors and peoples who were part of Israel's political world at the time the stories began to be written down (eighth century B.C.E.). Lot is the ancestor of the Transjordanian peoples of Ammon and Moab, and Ishmael personifies the nomadic peoples known to have inhabited north Arabia, although located in the Old Testament in the Negev. Esau personifies Edom (36:1), and Laban represents the Aramean states to Israel's north. A persistent theme is that of difference between the ancestors and the indigenous Canaanites… In fact, the theme of the differences between Judah and Israel, as personified by the ancestors, and the neighboring peoples of the time of the monarchy is pressed effectively into theological service to articulate the choosing by God of Judah and Israel to bring blessing to all peoples.
According to Martin Noth, a scholar of the Hebrew Bible, the narratives of Isaac date back to an older cultural stage than that of the West-Jordanian Jacob. At that era, the Israelite tribes were not yet sedentary. In the course of looking for grazing areas, they had come in contact in southern Philistia with the inhabitants of the settled countryside. The biblical historian A. Jopsen believes in the connection between the Isaac traditions and the north, and in support of this theory adduces Amos 7:9 ("the high places of Isaac").
Albrecht Alt and Martin Noth hold that, "The figure of Isaac was enhanced when the theme of promise, previously bound to the cults of the 'God the Fathers' was incorporated into the Israelite creed during the southern-Palestinian stage of the growth of the Pentateuch tradition." According to Martin Noth, at the Southern Palestinian stage of the growth of the Pentateuch tradition, Isaac became established as one of the biblical patriarchs, but his traditions were receded in the favor of Abraham.
Scholars like Israel Finkelstein proposed that Isaac might be the ancestor worshipped in Beersheba and the oldest tradition about him might be the ancestor myth dating back to at least 8th century BCE as shown in Amos 7:9, while proposing that the story about him conflicting with Abimelech, king of Gerar, and Philistines, which is the story that has possibility that Abraham cycle could have vampirized or vice versa, could have been originated and have background in 7th century BCE, and could be made to aim at justifying and legitimizing the claim of Judah over the Judahite territories that are transferred to the Philistine cities by Sennacherib because of several reasons: it was time when Gerar(Tel Haror) had the special importance and fortified Assyrian administration center; there was king of Ashdod, Ahimilki, whose name resembles and reminds Abimelech; the Kingdom of Judah could have gotten back parts of Judahite territories back as Judah was compliant vassal of Assyria under Manasseh. In addition, Israel Finkelstein proposed that Abraham might be the ancestor worshipped in Hebron, and Jacob might be the ancestor worshipped in Israel, but the earliest tradition of Jacob, the tradition about him and his uncle Laban the Aramean establishing the border between them, might be originated in Gilead.
The earliest Christian portrayal of Isaac is found in the Roman catacomb frescoes. Excluding the fragments, Alison Moore Smith classifies these artistic works in three categories:
Abraham leads Isaac towards the altar; or Isaac approaches with the bundle of sticks, Abraham having preceded him to the place of offering ... Abraham is upon a pedestal and Isaac stands near at hand, both figures in orant attitude ... Abraham is shown about to sacrifice Isaac while the latter stands or kneels on the ground beside the altar. Sometimes Abraham grasps Isaac by the hair. Occasionally the ram is added to the scene and in the later paintings the Hand of God emerges from above. | [
{
"paragraph_id": 0,
"text": "Isaac is one of the three patriarchs of the Israelites and an important figure in the Abrahamic religions, including Judaism, Christianity, and Islam. He was the son of Abraham and Sarah, the father of Jacob and Esau, and the grandfather of the twelve tribes of Israel.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Isaac's name means \"he will laugh\", reflecting the laughter, in disbelief, of Abraham and Sarah, when told by God that they would have a child. He is the only patriarch whose name was not changed, and the only one who did not move out of Canaan. According to the narrative, he died aged 180, the longest-lived of the three patriarchs.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The anglicized name \"Isaac\" is a transliteration of the Hebrew name יִצְחָק (Yīṣḥāq) which literally means \"He laughs/will laugh\". Ugaritic texts dating from the 13th century BCE refer to the benevolent smile of the Canaanite deity El. Genesis, however, ascribes the laughter to Isaac's parents, Abraham and Sarah, rather than El. According to the biblical narrative, Abraham fell on his face and laughed when God (Hebrew, Elohim) imparted the news of their son's eventual birth. He laughed because Sarah was past the age of childbearing; both she and Abraham were advanced in age. Later, when Sarah overheard three messengers of the Lord renew the promise, she laughed inwardly for the same reason. Sarah denied laughing when God questioned Abraham about it.",
"title": "Etymology"
},
{
"paragraph_id": 3,
"text": "After God changes Abram and Sarai's names to Abraham and Sarah, he tells Abraham that he will bear a second son by Sarah named Isaac, with whom a new covenant would be established. In response, Abraham began to laugh, as both he and Sarah were well beyond natural child-bearing age. Some time later, three men who Abraham identifies as messengers of God visit him and Sarah, and Abraham treats them to food and niceties. They repeat the prophecy that Sarah would bear a child, promising Isaac's birth within a year's time, at which point Sarah laughs in disbelief. God questions why the pair laughed in disbelief at his words, and if it is because they believe such things were not within his power. Now afraid, they futilely deny ever having laughed at God's words.",
"title": "Genesis narrative"
},
{
"paragraph_id": 4,
"text": "Time passes as Isaac is born. Isaac was Abraham's second son, as Hagar was the mother of his first son, Ishmael. Isaac was Sarah's first and only child.",
"title": "Genesis narrative"
},
{
"paragraph_id": 5,
"text": "On the eighth day from his birth, Isaac was circumcised, as was necessary for all males of Abraham's household, in order to be in compliance with Yahweh's covenant.",
"title": "Genesis narrative"
},
{
"paragraph_id": 6,
"text": "After Isaac had been weaned, Sarah saw Ishmael playing with him, and urged her husband to cast out Hagar the bondservant and her son, so that Isaac would be Abraham's sole heir. Abraham was hesitant, but at God's order he listened to his wife's request.",
"title": "Genesis narrative"
},
{
"paragraph_id": 7,
"text": "At some point in Isaac's youth, his father Abraham took him to Mount Moriah. At God's command, Abraham was to build a sacrificial altar and sacrifice his son Isaac upon it. After he had bound his son to the altar and drawn his knife to kill him, at the last moment an angel of God prevented Abraham from proceeding. Instead, he was directed to sacrifice a nearby ram that was stuck in thickets.",
"title": "Genesis narrative"
},
{
"paragraph_id": 8,
"text": "Before Isaac was 40 (Genesis 25:20), Abraham sent Eliezer, his steward, into Mesopotamia to find a wife for Isaac, from his nephew Bethuel's family. Eliezer chose the Aramean Rebekah for Isaac. After many years of marriage to Isaac, Rebekah had still not given birth to a child and was believed to be barren. Isaac prayed for her and she conceived. Rebekah gave birth to twin boys, Esau and Jacob. Isaac was 60 years old when his two sons were born. Isaac favored Esau, and Rebekah favored Jacob.",
"title": "Genesis narrative"
},
{
"paragraph_id": 9,
"text": "The narratives about Isaac do not mention his having concubines.",
"title": "Genesis narrative"
},
{
"paragraph_id": 10,
"text": "Isaac moved to Beer-lahai-roi after his father died. When the land experienced famine, he moved to the Philistine land of Gerar where his father once lived. This land was still under the control of King Abimelech as it was in the days of Abraham. Like his father, Isaac also pretended that Rebekah was his sister due to fear that Abimelech would kill him in order to take her. He had gone back to all of the wells that his father dug and saw that they were all stopped up with earth. The Philistines did this after Abraham died. So, Isaac unearthed them and began to dig for more wells all the way to Beersheba, where he made a pact with Abimelech, just like in the day of his father.",
"title": "Genesis narrative"
},
{
"paragraph_id": 11,
"text": "Isaac grew old and became blind. He called his son Esau and directed him to procure some venison for him, in order to receive Isaac's blessing. While Esau was hunting, Jacob, after listening to his mother's advice, deceived his blind father by misrepresenting himself as Esau and thereby obtained his father's blessing, such that Jacob became Isaac's primary heir and Esau was left in an inferior position. According to Genesis 25:29–34, Esau had previously sold his birthright to Jacob for \"bread and stew of lentils\". Thereafter, Isaac sent Jacob into Mesopotamia to take a wife of his mother's brother's house. After 20 years working for his uncle Laban, Jacob returned home. He reconciled with his twin brother Esau, then he and Esau buried their father, Isaac, in Hebron after he died at the age of 180.",
"title": "Genesis narrative"
},
{
"paragraph_id": 12,
"text": "According to local tradition, the graves of Isaac and Rebekah, along with the graves of Abraham and Sarah and Jacob and Leah, are in the Cave of the Patriarchs.",
"title": "Burial site"
},
{
"paragraph_id": 13,
"text": "In rabbinical tradition, the age of Isaac at the time of binding is taken to be 37, which contrasts with common portrayals of Isaac as a child. The rabbis also thought that the reason for the death of Sarah was the news of the intended sacrifice of Isaac. The sacrifice of Isaac is cited in appeals for the mercy of God in later Jewish traditions. The post-biblical Jewish interpretations often elaborate the role of Isaac beyond the biblical description and primarily focus on Abraham's intended sacrifice of Isaac, called the aqedah (\"binding\"). According to a version of these interpretations, Isaac died in the sacrifice and was revived. According to many accounts of Aggadah, unlike the Bible, it is Satan who is testing Isaac as an agent of God. Isaac's willingness to follow God's command at the cost of his death has been a model for many Jews who preferred martyrdom to violation of the Jewish law.",
"title": "Jewish views"
},
{
"paragraph_id": 14,
"text": "According to the Jewish tradition, Isaac instituted the afternoon prayer. This tradition is based on Genesis chapter 24, verse 63 (\"Isaac went out to meditate in the field at the eventide\").",
"title": "Jewish views"
},
{
"paragraph_id": 15,
"text": "Isaac was the only patriarch who stayed in Canaan during his whole life and though once he tried to leave, God told him not to do so. Rabbinic tradition gave the explanation that Isaac was almost sacrificed and anything dedicated as a sacrifice may not leave the Land of Israel. Isaac was the oldest of the biblical patriarchs at the time of his death, and the only patriarch whose name was not changed.",
"title": "Jewish views"
},
{
"paragraph_id": 16,
"text": "Rabbinic literature also linked Isaac's blindness in old age, as stated in the Bible, to the sacrificial binding: Isaac's eyes went blind because the tears of angels present at the time of his sacrifice fell on Isaac's eyes.",
"title": "Jewish views"
},
{
"paragraph_id": 17,
"text": "The early Christian church continued and developed the New Testament theme of Isaac as a type of Christ and the Church being both \"the son of the promise\" and the \"father of the faithful\". Tertullian draws a parallel between Isaac's bearing the wood for the sacrificial fire with Christ's carrying his cross. and there was a general agreement that, while all the sacrifices of the Old Law were anticipations of that on Calvary, the sacrifice of Isaac was so \"in a pre-eminent way\".",
"title": "Christian views"
},
{
"paragraph_id": 18,
"text": "The Eastern Orthodox Church and the Roman Catholic Church consider Isaac as a saint along with other biblical patriarchs. Along with those of other patriarchs and the Old Testament Righteous, his feast day is celebrated in the Eastern Orthodox Church and the Byzantine rite of the Catholic Church on the Second Sunday before Christmas (December 11–17), under the title the Sunday of the Forefathers.",
"title": "Christian views"
},
{
"paragraph_id": 19,
"text": "Isaac is commemorated in the Catholic Church on 25 March or on 17 December.",
"title": "Christian views"
},
{
"paragraph_id": 20,
"text": "The New Testament states Isaac was \"offered up\" by his father Abraham, and that Isaac blessed his sons. Paul contrasted Isaac, symbolizing Christian liberty, with the rejected older son Ishmael, symbolizing slavery; Hagar is associated with the Sinai covenant, while Sarah is associated with the covenant of grace, into which her son Isaac enters. The Epistle of James chapter 2, verses 21–24, states that the sacrifice of Isaac shows that justification (in the Johannine sense) requires both faith and works.",
"title": "Christian views"
},
{
"paragraph_id": 21,
"text": "In the Epistle to the Hebrews, Abraham's willingness to follow God's command to sacrifice Isaac is used as an example of faith as is Isaac's action in blessing Jacob and Esau with reference to the future promised by God to Abraham. In verse 19, the author views the release of Isaac from sacrifice as analogous to the resurrection of Jesus, the idea of the sacrifice of Isaac being a prefigurement of the sacrifice of Jesus on the cross.",
"title": "Christian views"
},
{
"paragraph_id": 22,
"text": "Islam considers Isaac (Arabic: إسحاق, romanized: Isḥāq) a prophet, and describes him as the father of the Israelites and a righteous servant of God.",
"title": "Islamic views"
},
{
"paragraph_id": 23,
"text": "Isaac, along with Ishmael, is highly important for Muslims for continuing to preach the message of monotheism after his father Abraham. Among Isaac's children was the follow-up Israelite patriarch Jacob, who is also venerated as an Islamic prophet.",
"title": "Islamic views"
},
{
"paragraph_id": 24,
"text": "Isaac is mentioned seventeen times by name in the Quran, often with his father and his son, Jacob. The Quran states that Abraham received \"good tidings of Isaac, a prophet, of the righteous\", and that God blessed them both (37:112). In a fuller description, when angels came to Abraham to tell him of the future punishment to be imposed on Sodom and Gomorrah, his wife, Sarah, \"laughed, and We gave her good tidings of Isaac, and after Isaac of (a grandson) Jacob\" (11:71–74); and it is further explained that this event will take place despite Abraham and Sarah's old age. Several verses speak of Isaac as a \"gift\" to Abraham (6:84; 14:49–50), and 24:26–27 adds that God made \"prophethood and the Book to be among his offspring\", which has been interpreted to refer to Abraham's two prophetic sons, his prophetic grandson Jacob, and his prophetic great-grandson Joseph. In the Quran, it later narrates that Abraham also praised God for giving him Ishmael and Isaac in his old age (14:39–41).",
"title": "Islamic views"
},
{
"paragraph_id": 25,
"text": "Elsewhere in the Quran, Isaac is mentioned in lists: Joseph follows the religion of his forefathers Abraham, Isaac and Jacob (12:38) and speaks of God's favor to them (12:6); Jacob's sons all testify their faith and promise to worship the God that their forefathers, \"Abraham, Ishmael and Isaac\", worshiped (2:127); and the Quran commands Muslims to believe in the revelations that were given to \"Abraham, Ishmael, Isaac, Jacob and the Patriarchs\" (2:136; 3:84). In the Quran's narrative of Abraham's near-sacrifice of his son (37:102), the name of the son is not mentioned and debate has continued over the son's identity, though many feel that the identity is the least important element in a story which is given to show the courage that one develops through faith.",
"title": "Islamic views"
},
{
"paragraph_id": 26,
"text": "The Quran mentions Isaac as a prophet and a righteous man of God. Isaac and Jacob are mentioned as being bestowed upon Abraham as gifts of God, who then worshipped God only and were righteous leaders in the way of God:",
"title": "Islamic views"
},
{
"paragraph_id": 27,
"text": "And We bestowed on him Isaac and, as an additional gift, (a grandson), Jacob, and We made righteous men of every one (of them). And We made them leaders, guiding (men) by Our Command, and We sent them inspiration to do good deeds, to establish regular prayers, and to practise regular charity; and they constantly served Us (and Us only).",
"title": "Islamic views"
},
{
"paragraph_id": 28,
"text": "And WE gave him the glad tidings of Isaac, a Prophet, and one of the righteous.",
"title": "Islamic views"
},
{
"paragraph_id": 29,
"text": "Some scholars have described Isaac as \"a legendary figure\" or \"as a figure representing tribal history, or \"as a seminomadic leader\". The stories of Isaac, like other patriarchal stories of Genesis, are generally believed to have \"their origin in folk memories and oral traditions of the early Hebrew pastoralist experience\". The Cambridge Companion to the Bible makes the following comment on the biblical stories of the patriarchs:",
"title": "Academic"
},
{
"paragraph_id": 30,
"text": "Yet for all that these stories maintain a distance between their world and that of their time of literary growth and composition, they reflect the political realities of the later periods. Many of the narratives deal with the relationship between the ancestors and peoples who were part of Israel's political world at the time the stories began to be written down (eighth century B.C.E.). Lot is the ancestor of the Transjordanian peoples of Ammon and Moab, and Ishmael personifies the nomadic peoples known to have inhabited north Arabia, although located in the Old Testament in the Negev. Esau personifies Edom (36:1), and Laban represents the Aramean states to Israel's north. A persistent theme is that of difference between the ancestors and the indigenous Canaanites… In fact, the theme of the differences between Judah and Israel, as personified by the ancestors, and the neighboring peoples of the time of the monarchy is pressed effectively into theological service to articulate the choosing by God of Judah and Israel to bring blessing to all peoples.",
"title": "Academic"
},
{
"paragraph_id": 31,
"text": "According to Martin Noth, a scholar of the Hebrew Bible, the narratives of Isaac date back to an older cultural stage than that of the West-Jordanian Jacob. At that era, the Israelite tribes were not yet sedentary. In the course of looking for grazing areas, they had come in contact in southern Philistia with the inhabitants of the settled countryside. The biblical historian A. Jopsen believes in the connection between the Isaac traditions and the north, and in support of this theory adduces Amos 7:9 (\"the high places of Isaac\").",
"title": "Academic"
},
{
"paragraph_id": 32,
"text": "Albrecht Alt and Martin Noth hold that, \"The figure of Isaac was enhanced when the theme of promise, previously bound to the cults of the 'God the Fathers' was incorporated into the Israelite creed during the southern-Palestinian stage of the growth of the Pentateuch tradition.\" According to Martin Noth, at the Southern Palestinian stage of the growth of the Pentateuch tradition, Isaac became established as one of the biblical patriarchs, but his traditions were receded in the favor of Abraham.",
"title": "Academic"
},
{
"paragraph_id": 33,
"text": "Scholars like Israel Finkelstein proposed that Isaac might be the ancestor worshipped in Beersheba and the oldest tradition about him might be the ancestor myth dating back to at least 8th century BCE as shown in Amos 7:9, while proposing that the story about him conflicting with Abimelech, king of Gerar, and Philistines, which is the story that has possibility that Abraham cycle could have vampirized or vice versa, could have been originated and have background in 7th century BCE, and could be made to aim at justifying and legitimizing the claim of Judah over the Judahite territories that are transferred to the Philistine cities by Sennacherib because of several reasons: it was time when Gerar(Tel Haror) had the special importance and fortified Assyrian administration center; there was king of Ashdod, Ahimilki, whose name resembles and reminds Abimelech; the Kingdom of Judah could have gotten back parts of Judahite territories back as Judah was compliant vassal of Assyria under Manasseh. In addition, Israel Finkelstein proposed that Abraham might be the ancestor worshipped in Hebron, and Jacob might be the ancestor worshipped in Israel, but the earliest tradition of Jacob, the tradition about him and his uncle Laban the Aramean establishing the border between them, might be originated in Gilead.",
"title": "Academic"
},
{
"paragraph_id": 34,
"text": "The earliest Christian portrayal of Isaac is found in the Roman catacomb frescoes. Excluding the fragments, Alison Moore Smith classifies these artistic works in three categories:",
"title": "In art"
},
{
"paragraph_id": 35,
"text": "Abraham leads Isaac towards the altar; or Isaac approaches with the bundle of sticks, Abraham having preceded him to the place of offering ... Abraham is upon a pedestal and Isaac stands near at hand, both figures in orant attitude ... Abraham is shown about to sacrifice Isaac while the latter stands or kneels on the ground beside the altar. Sometimes Abraham grasps Isaac by the hair. Occasionally the ram is added to the scene and in the later paintings the Hand of God emerges from above.",
"title": "In art"
}
]
| Isaac is one of the three patriarchs of the Israelites and an important figure in the Abrahamic religions, including Judaism, Christianity, and Islam. He was the son of Abraham and Sarah, the father of Jacob and Esau, and the grandfather of the twelve tribes of Israel. Isaac's name means "he will laugh", reflecting the laughter, in disbelief, of Abraham and Sarah, when told by God that they would have a child. He is the only patriarch whose name was not changed, and the only one who did not move out of Canaan. According to the narrative, he died aged 180, the longest-lived of the three patriarchs. | 2001-10-26T05:52:11Z | 2023-12-22T13:05:42Z | [
"Template:For multi",
"Template:Transliteration",
"Template:Lang-ar",
"Template:Authority control",
"Template:Portal",
"Template:Sfn",
"Template:Reflist",
"Template:Cite web",
"Template:Prophets in the Quran",
"Template:Infobox person",
"Template:Efn",
"Template:Cite encyclopedia",
"Template:Prophets of the Tanakh",
"Template:Short description",
"Template:Commons category",
"Template:Cite CE1913",
"Template:Qref",
"Template:Catholic saints",
"Template:Notelist",
"Template:Bibleverse",
"Template:Adam to David",
"Template:Lang",
"Template:Main",
"Template:Blockquote",
"Template:Quote",
"Template:Cite EB1911",
"Template:Book of Genesis",
"Template:Good article",
"Template:Citation needed",
"Template:Cite book",
"Template:Cite journal"
]
| https://en.wikipedia.org/wiki/Isaac |
15,193 | Italian Football League | Italian Football League (IFL) is the top level American football league in Italy established in 1980.
The annual final play-off game to determine the league champion is called the Italian Bowl, which awards, for American football, the title of "champion of Italy" and the scudetto.
In Italy, the first American football game took place in Genoa on 27 November 1913 when the teams of the USS Connecticut and USS Kansas faced each other, two of the 14 ships of the American Great White Fleet temporarily docked in the Ligurian port during an exercise cruise in the Mediterranean Sea. USS Connecticut won 17–6.
After this sporadic appearance, American football returned to Italy with the Allied troops during World War II. American football followed the advance of the US units from the south to the north of the Italian peninsula. On 23 November 1944, a touch football match was played at the Stadio della Vittoria in Bari, between the Playboys and the Technical School. The trophy, called the "Bambino Bowl", was won by the Technical School 13–0 in front of an audience of 5,000.
A little over a month later, the Spaghetti Bowl was held in Florence in front of 25,000 people on New Year's Day 1945 between the Bridgebusters (representatives of the Twelfth Air Force) and the Mudders (United States Army North), which the Mudders won 20–0. Although other matches of which no documentation remains were probably played in those years, the first peacetime match took place in Trieste, the last territory liberated from the Nazi-Fascists, in January 1948. The match was organized by the Trieste United States Troops and saw the SP'S prevail over D Company by "three touchdowns" (then 21–0).
In the 1970s teams formed and played in Italy. The first American football championship organized in Italy, which was never recognized by the federation, took place in 1977 and was won by the Tauri Torino [it].
Among the games played in the 1970s there was the first official match played between Italian American football teams in preparation for the first championship officially recognized by the federation; played on 24 June 1978 at the Stadio Carlo Speroni in Busto Arsizio, it was won 36–0 by the Rhinos Milano over the Gallarate Frogs.
In 1980 the first official American football league in Italy was established and crowned a champion. This championship did not include a final and was won by Lupi Roma [it]. However, the title of first champions of Italy was recognized to the Lupi only in 2016.
In the late 1970s and early 1980s, the Italian league (Series A) was one of the first leagues in Europe to sign professional import players and coaches from the US. The league enjoyed good popularity in its early years, especially the late 1980s and early 1990s, with reported attendance of nearly 20,000 fans for a Series A championship final in that period. American football in Italy has had ups and downs since then, but has always maintained a competitive league, with several lower levels playing below the Italian Football League (IFL).
The new IFL was founded in 2007, taking over from the previous top league, the National Football League Italy. The league was born as a result of the departure of several of the best clubs from the old championship organized by the Italian federation, such as the Milano Rhinos, Parma Panthers, Bologna Doves and Bolzano Giants. However, some of the historic Italian clubs have not joined the new league and continue to participate in different tournaments organized by other federations.
In the following years, many teams moved to the Federazione Italiana di American Football [it] (the federation the IFL belongs to), and most of the biggest teams are now again part of the IFL, which is the First Division, or of the other two divisions.
The Bergamo Lions have won the most Italian Bowl league championships winning 12 finals.
On Saturday, July 1, 2023, Italian Bowl XLII was played at the Glass Bowl Stadium on the campus of The University of Toledo in Toledo, Ohio, USA, marking the first Italian Football League championship game held outside of Europe. The Parma Panthers won the game, which was played in front of nearly 10,000 fans and televised in the USA.
† defunct; ♦ due to league expansion, the Napoli team could play the 2015 IFL season and was not relegated to the second division; ‡ Roma Grizzlies won the second division championship and earned the right to play the 2015 IFL season
Italian Bowl is the annual final play-off game of the Italian Football League (IFL) to determine the league champion. It is the game that awards, for American football, the title of "champion of Italy" and the scudetto. Until 2014, the championship game was called the Italian Super Bowl. | [
{
"paragraph_id": 0,
"text": "Italian Football League (IFL) is the top level American football league in Italy established in 1980.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The annual final play-off game to determine the league champion is called the Italian Bowl, that awards, for American football, the title of \"champion of Italy\" and the scudetto.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In Italy, the first American football game took place in Genoa on 27 November 1913 when the teams of the USS Connecticut and USS Kansas faced each other, two of the 14 ships of the American Great White Fleet temporarily docked in the Ligurian port during an exercise cruise in the Mediterranean Sea. USS Connecticut won 17–6.",
"title": "Background"
},
{
"paragraph_id": 3,
"text": "After this sporadic appearance, American football returned to Italy with the Allied troops during World War II. American football followed the advance of the US units from the south to the north of the Italian peninsula. On 23 November 1944, a touch football match was played at the Stadio della Vittoria in Bari, between the Playboys and the Technical School. The trophy, called the \"Bambino Bowl\", was won by the Technical School 13–0 in front of an audience of 5,000.",
"title": "Background"
},
{
"paragraph_id": 4,
"text": "A little over a month later, the Spaghetti Bowl was held in Florence in front of 25,000 people on New Year's Day 1945 between the Bridgebusters (representatives of the Twelfth Air Force) and the Mudders (United States Army North), who they won 20–0. Although probably other matches were played in those years of which no documentation remains, the first in peacetime, took place in Trieste, the last territory liberated from the Nazi-Fascists, in January 1948. The match was organized by the Trieste United States Troops and saw the SP'S prevail over D Company by \"three touchdowns\" (then 21–0).",
"title": "Background"
},
{
"paragraph_id": 5,
"text": "In the 1970s teams formed and played in Italy. The first American football championship organized in Italy, which was never recognized by the federation, took place in 1977 and was won by the Tauri Torino [it].",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Among the games played in the 1970s there was the first official match played between Italian American football teams in preparation for the first championship officially recognized by the federation; played on 24 June 1978 at the Stadio Carlo Speroni in Busto Arsizio, it was won 36–0 by the Rhinos Milano over the Gallarate Frogs.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In 1980 the first official American football league in Italy was established and crowned a champion. This championship did not include a final and was won by Lupi Roma [it]. However, the title of first champions of Italy was recognized to the Lupi only in 2016.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The Italian league (Series A) in the late 1970s and early 1980s, was one of the first leagues in Europe to sign professional import players and coaches from the US. The league had good popularity in the early years especially the late 1980s and early 1990s with reported attendance of nearly 20,000 fans for a Series A league final championship game in that time period. American Football in Italy has had ups and downs since that time but has always had a competitive league with different lower levels playing below the Italian Football League (IFL).",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The new IFL was founded in 2007, taking over previous league's significance called (National Football League Italy). The league was born as a result of the escape of several of the best clubs of the old championship organized by the Italian federation, such as Milano Rhinos, Parma Panthers, Bologna Doves and Bolzano Giants. However some of the historic Italian clubs have not joined the new league and continue to participate in different tournaments organized by other federations.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In the following years a lot of teams moved to the Federazione Italiana di American Football [it] (the federation the IFL belongs to) and most of the biggest teams are now again part of the IFL that is the First Division or in the other two divisions.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The Bergamo Lions have won the most Italian Bowl league championships winning 12 finals.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "On Saturday, July 1, 2023, Italian Bowl XLII will be played at the Glass Bowl Stadium on the campus of The University of Toledo, Toledo, Ohio, USA. This will mark the first Italian Football League Championship held outside of Europe. The Parma Panthers won the game played in front of nearly 10,000 fans and televised in the USA.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "† defunct ♦ due to league expansion the Napoli team can play the 2015 IFL season and is not relegated to the second division ‡ Roma Grizzlies won the second division championship and earned the right to play the 2015 IFL season",
"title": "IFL teams"
},
{
"paragraph_id": 14,
"text": "Italian Bowl is the annual final play-off game of the Italian Football League (IFL) to determine the league champion. It is the game that awards, for American football, the title of \"champion of Italy\" and the scudetto. Until 2014 the championship game was called Italian Super Bowl",
"title": "Italian Bowl"
}
]
| Italian Football League (IFL) is the top level American football league in Italy established in 1980. The annual final play-off game to determine the league champion is called the Italian Bowl, which awards, for American football, the title of "champion of Italy" and the scudetto. | 2001-10-26T10:57:25Z | 2023-07-29T08:27:19Z | [
"Template:Ill",
"Template:Clear",
"Template:Portal",
"Template:Cite web",
"Template:Commons category",
"Template:Top sport leagues in Italy",
"Template:European Football Leagues",
"Template:Short description",
"Template:Symbol",
"Template:0",
"Template:Location map ",
"Template:Reflist",
"Template:Webarchive",
"Template:Distinguish",
"Template:Infobox sports league",
"Template:Ndash"
]
| https://en.wikipedia.org/wiki/Italian_Football_League |
15,195 | Iduna | In Norse mythology, Iðunn is a goddess associated with apples and youth. Iðunn is attested in the Poetic Edda, compiled in the 13th century from earlier traditional sources, and the Prose Edda, written in the 13th century by Snorri Sturluson. In both sources, she is described as the wife of the skaldic god Bragi, and in the Prose Edda, also as a keeper of apples and granter of eternal youthfulness.
The Prose Edda relates how Loki was once forced by the jötunn Þjazi to lure Iðunn out of Asgard and into a wood with the promise of apples even fairer than her own. Þjazi, in the form of an eagle, abducts Iðunn from the wood, bearing her off to his home. Iðunn's absence causes the gods to grow old and grey, and they realize that Loki is responsible for her disappearance. Under duress, Loki promises to bring her back and, setting out in the form of a falcon, eventually finds her alone at Þjazi's home. He turns her into a nut and flies back toward Asgard. When Þjazi returns to find Iðunn gone, he assumes his eagle form once more and flies off in hot pursuit of Loki and his precious burden. The gods build a pyre in the courtyard of Asgard and, just as Loki has stopped short of it, kindle it. Unable to halt his frenzied onrush, Þjazi plunges headlong through the fire, falling to the ground with his feathers aflame, whereupon the gods attack and kill him.
A number of theories surround Iðunn, including potential links to fertility, and her potential origin in Proto-Indo-European religion. Long the subject of artworks, Iðunn is sometimes referenced in modern popular culture.
The name Iðunn has been variously explained as meaning "ever young", "rejuvenator", or "the rejuvenating one". As the modern English alphabet lacks the eth (ð) character, Iðunn is sometimes anglicized as Idun, Idunn or Ithun. An -a suffix is sometimes applied to denote femininity, resulting in forms such as Iduna and Idunna.
The name Iðunn appears as a personal name in several historical sources and the Landnámabók records that it has been in use in Iceland as a personal name since the pagan period (10th century). Landnámabók records two incidents of women by the name of Iðunn; Iðunn Arnardóttir, the daughter of an early settler, and Iðunn Molda-Gnúpsdóttir, granddaughter of one of the earliest settlers recorded in the book. The name Iðunn has been theorized as the origin of the Old English name Idonea. 19th century author Charlotte Mary Yonge writes that the derivation of Idonea from Idunn is "almost certain," noting that although Idonea may be "the feminine of the Latin idoneus (fit), its absence in the Romance countries may be taken as an indication that it was a mere classicising of the northern goddess of the apples of youth."
19th-century scholar Jacob Grimm proposed a potential etymological connection to the idisi. Grimm states that "with the original form idis the goddess Idunn may possibly be connected." Grimm further states that Iðunn may have been known with another name, and that "Iðunn would seem by Saem. 89a to be an Elvish word, but we do not hear of any other name for the goddess."
Iðunn appears in the Poetic Edda poem Lokasenna and, included in some modern editions of the Poetic Edda, in the late poem Hrafnagaldr Óðins.
Iðunn is introduced as Bragi's wife in the prose introduction to the poem Lokasenna, where the two attend a feast held by Ægir. In stanzas 16, 17, and 18, dialog occurs between Loki and Iðunn after Loki has insulted Bragi. In stanza 16, Iðunn (here anglicized as Idunn) says:
In this exchange, Loki has accused Iðunn of having slept with the killer of her brother. However, neither this brother nor killer are accounted for in any other surviving source. Afterward, the goddess Gefjon speaks up and the poem continues in turn.
In the poem Hrafnagaldr Óðins, additional information is given about Iðunn, though this information is otherwise unattested. Here, Iðunn is identified as descending from elves, as one of "Ivaldi's elder children" and as a dís who dwells in dales. Stanza 6 reads:
Iðunn is introduced in the Prose Edda in section 26 of the Prose Edda book Gylfaginning. Here, Iðunn is described as Bragi's wife and keeper of an eski (a wooden box made of ash wood and often used for carrying personal possessions) within which she keeps apples. The apples are bitten into by the gods when they begin to grow old and they then become young again, which is described as occurring up until Ragnarök. Gangleri (described as King Gylfi in disguise) states that it seems to him that the gods depend greatly upon Iðunn's good faith and care. With a laugh, High responds that misfortune once came close, that he could tell Gangleri about it, but first he must hear the names of more of the Æsir, and he continues providing information about gods.
In the book Skáldskaparmál, Idunn is mentioned in its first chapter (numbered as 55) as one of eight ásynjur (goddesses) sitting in their thrones at a banquet in Asgard for Ægir. In chapter 56, Bragi tells Ægir about Iðunn's abduction by the jötunn Þjazi. Bragi says that after hitting an eagle (Þjazi in disguise) with a pole, Loki finds himself stuck to the bird. Loki is pulled further and further into the sky, his feet banging against stones, gravel, and trees until, fearful that his arms will be pulled from their sockets, he roars for mercy, begging the eagle to set him free, to which the eagle agrees, but only on condition that Loki make a solemn vow to lure Iðunn, bearing her apples of youth, from the safety of Asgard. Loki accepts Þjazi's conditions and returns to his friends Odin and Hœnir. At the time agreed upon by Loki and Þjazi, Loki lures Iðunn out of Asgard into "a certain forest", telling her that he has discovered some apples that she would find worth keeping, and furthermore that she should bring her own apples with her so that she may compare them with the apples he has discovered. Þjazi arrives in eagle shape, snatches Iðunn, flies away with her and takes her to his home, Þrymheimr.
The Æsir begin to grow grey and old at the disappearance of Idunn. The Æsir assemble at a thing where they ask one another when Iðunn had been seen last. The Æsir realize that the last time that Iðunn was seen was when she was going outside of Asgard with Loki, and so they have Loki arrested and brought to the thing. Loki is threatened with death and torture. Terrified, Loki says that if the goddess Freyja will lend him her "falcon shape" he will search for Iðunn in the land of Jötunheimr. Freyja lends the falcon shape to Loki, and with it he flies north to Jötunheimr. One day later, Loki arrives at Þjazi's home. There he discovers that Þjazi is out at sea in a boat, having left Iðunn at home alone. Loki transforms the goddess into a nut, grasps her in his claws, and flies away with her as fast as possible.
Þjazi, arriving home to discover Iðunn gone, resumes his eagle shape and flies off in pursuit of Loki, his mighty wings stirring up a storm as he does so. The Æsir, seeing a falcon flying with a nut clutched in its claws and hotly pursued by an eagle, make haste to pile up a great heap of wood shavings and set it alight. The falcon flies over the battlements of Asgard and drops down behind the wall. The eagle, however, overshoots the falcon and, unable to stop, plunges through the fire, setting light to his feathers and falling to the ground within the gates of Asgard, whereat the Æsir set upon the jötunn and kill him, leading the narrator to comment "and this killing is greatly renowned."
In chapter 10, "husband of Iðunn" is given as a means of referring to Bragi. In chapter 86, means of referring to Iðunn are given: "wife of Bragi", "keeper of the apples", and her apples "the Æsir's age old cure". Additionally, in connection to the story of her abduction by Þjazi, she may be referred to as "Þjazi's booty". A passage of the 10th-century poem Haustlöng where the skald Þjóðólfr of Hvinir gives a lengthy description of a richly detailed shield he has received that features a depiction of the abduction of Iðunn. Within the cited portions of Haustlöng, Iðunn is referred to as "the maid who knew the Æsir's old age cure", "the gods' lady", "ale-Gefn", "the Æsir's girl-friend", and once by name.
In chapter 33, Iðunn is cited as one of the six ásynjur visiting Ægir. Iðunn appears a final time in the Prose Edda in chapter 75, where she appears in a list of ásynjur.
Some surviving stories regarding Iðunn focus on her youth-maintaining apples. English scholar Hilda Ellis Davidson links apples to religious practices in Germanic paganism. She points out that buckets of apples were found in the 9th-century Oseberg ship burial site in Norway and that fruit and nuts (Iðunn having been described as being transformed into a nut in Skáldskaparmál) have been found in the early graves of the Germanic peoples in England and elsewhere on the continent of Europe which may have had a symbolic meaning and also that nuts are still a recognized symbol of fertility in Southwest England.
Davidson notes a connection between apples and the Vanir, a group of gods associated with fertility in Norse mythology, citing an instance of eleven "golden apples" being given to woo the beautiful Gerðr by Skírnir, who was acting as messenger for the major Vanir god Freyr in stanzas 19 and 20 of Skírnismál. In Skírnismál, Gerðr mentions her brother's slayer in stanza 16, which Davidson states has led to some suggestions that Gerðr may have been connected to Iðunn as they are similar in this way. Davidson also notes a further connection between fertility and apples in Norse mythology; in chapter 2 of the Völsunga saga when the major goddess Frigg sends King Rerir an apple after he prays to Odin for a child, Frigg's messenger (in the guise of a crow) drops the apple in his lap as he sits atop a mound. Rerir's wife's consumption of the apple results in a six-year pregnancy and the caesarean section birth of their son—the hero Völsung.
Davidson points out the "strange" phrase "apples of Hel" used in an 11th-century poem by the skald Þórbjörn Brúnason. Davidson states this may imply that the apple was thought of by the skald as the food of the dead. Further, Davidson notes that the potentially Germanic goddess Nehalennia is sometimes depicted with apples and parallels exist in early Irish stories. Davidson asserts that while cultivation of the apple in Northern Europe extends back to at least the time of the Roman Empire and came to Europe from the Near East, the native varieties of apple trees growing in Northern Europe are small and bitter. Davidson concludes that in the figure of Iðunn "we must have a dim reflection of an old symbol: that of the guardian goddess of the life-giving fruit of the other world."
David Knipe theorizes Iðunn's abduction by Thjazi in eagle form as an example of the Indo-European motif "of an eagle who steals the celestial means of immortality." In addition, Knipe says that "a parallel to the theft of Iðunn's apples (symbols of fertility) has been noted in the Celtic myth where Brian, Iuchar, and Icharba, the sons of Tuirenn, assume the guise of hawks in order to steal sacred apples from the garden of Hisberna. Here, too, there is pursuit, the guardians being female griffins."
John Lindow theorizes that the possible etymological meaning of Iðunn—"ever young"—would potentially allow Iðunn to perform her ability to provide eternal youthfulness to the gods without her apples, and further states that Haustlöng does not mention apples but rather refers to Iðunn as the "maiden who understood the eternal life of the Æsir." Lindow further theorizes that Iðunn's abduction is "one of the most dangerous moments" for the gods, as the general movement of female jötnar to the gods would be reversed.
Regarding the accusations levelled towards Iðunn by Loki, Lee Hollander opines that Lokasenna was intended to be humorous and that the accusations thrown by Loki in the poem are not necessarily to be taken as "generally accepted lore" at the time it was composed. Rather they are charges that are easy for Loki to make and difficult for his targets to disprove, or which they do not care to refute.
In his study of the skaldic poem Haustlöng, Richard North comments that "[Iðunn] is probably to be understood as an aspect of Freyja, a goddess whom the gods rely on for their youth and beauty [...]". Supporting this contention is the fact that she is absent from the listing of goddesses in the Prose Edda's Gylfaginning despite her significance.
Iðunn has been the subject of a number of artistic depictions. These depictions include "Idun" (statue, 1821) by H. E. Freund, "Idun" (statue, 1843) and "Idun som bortrövas av jätten Tjasse i örnhamn" (plaster statue, 1856) by C. G. Qvarnström, "Brage sittande vid harpan, Idun stående bakom honom" (1846) by Nils Blommér, "Iduns Rückkehr nach Valhalla" by C. Hansen (resulting in an 1862 woodcut modeled on the painting by C. Hammer), "Bragi und Idun, Balder und Nanna" (drawing, 1882) by K. Ehrenberg, "Idun and the Apples" (1890) by J. Doyle Penrose, "Brita as Iduna" (1901) by Carl Larsson, "Loki och Idun" (1911) by John Bauer, "Idun" (watercolor, 1905) by B. E. Ward, and "Idun" (1901) by E. Doepler.
The 19th-century composer Richard Wagner's Der Ring des Nibelungen opera cycle features Freia, a version of the goddess Freyja combined with Iðunn.
Idunn Mons, a mons of the planet Venus, is named after Iðunn. The publication of the United States-based Germanic neopagan group The Troth (Idunna, edited by Diana L. Paxson) derives its name from that of the goddess. The Swedish magazine Idun was named after the goddess; she appears with her basket of apples on its banner.
In Fire Emblem: The Binding Blade, the sixth instalment of the tactical RPG series, the final boss is the corrupted divine dragon, Idunn. She was able to produce a large number of dragons very quickly, despite their slow rate of reproduction, likely a nod to Iðunn's role as a symbol of fertility.
In the 2018 God of War, Apples of Iðunn act as a collectable item to assist the player, though the goddess herself does not physically appear.
In episode 16 of season 6 of the Vikings TV series, Iðunn is portrayed by English actress Jerry-Jane Pears. | [
{
"paragraph_id": 0,
"text": "In Norse mythology, Iðunn is a goddess associated with apples and youth. Iðunn is attested in the Poetic Edda, compiled in the 13th century from earlier traditional sources, and the Prose Edda, written in the 13th century by Snorri Sturluson. In both sources, she is described as the wife of the skaldic god Bragi, and in the Prose Edda, also as a keeper of apples and granter of eternal youthfulness.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Prose Edda relates how Loki was once forced by the jötunn Þjazi to lure Iðunn out of Asgard and into a wood with the promise of apples even fairer than her own. Þjazi, in the form of an eagle, abducts Iðunn from the wood, bearing her off to his home. Iðunn's absence causes the gods to grow old and grey, and they realize that Loki is responsible for her disappearance. Under duress, Loki promises to bring her back and, setting out in the form of a falcon, eventually finds her alone at Þjazi's home. He turns her into a nut and flies back toward Asgard. When Þjazi returns to find Iðunn gone, he assumes his eagle form once more and flies off in hot pursuit of Loki and his precious burden. The gods build a pyre in the courtyard of Asgard and, just as Loki has stopped short of it, kindle it. Unable to halt his frenzied onrush, Þjazi plunges headlong through the fire, falling to the ground with his feathers aflame, whereupon the gods attack and kill him.",
"title": ""
},
{
"paragraph_id": 2,
"text": "A number of theories surround Iðunn, including potential links to fertility, and her potential origin in Proto-Indo-European religion. Long the subject of artworks, Iðunn is sometimes referenced in modern popular culture.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The name Iðunn has been variously explained as meaning \"ever young\", \"rejuvenator\", or \"the rejuvenating one\". As the modern English alphabet lacks the eth (ð) character, Iðunn is sometimes anglicized as Idun, Idunn or Ithun. An -a suffix is sometimes applied to denote femininity, resulting in forms such as Iduna and Idunna.",
"title": "Name"
},
{
"paragraph_id": 4,
"text": "The name Iðunn appears as a personal name in several historical sources and the Landnámabók records that it has been in use in Iceland as a personal name since the pagan period (10th century). Landnámabók records two incidents of women by the name of Iðunn; Iðunn Arnardóttir, the daughter of an early settler, and Iðunn Molda-Gnúpsdóttir, granddaughter of one of the earliest settlers recorded in the book. The name Iðunn has been theorized as the origin of the Old English name Idonea. 19th century author Charlotte Mary Yonge writes that the derivation of Idonea from Idunn is \"almost certain,\" noting that although Idonea may be \"the feminine of the Latin idoneus (fit), its absence in the Romance countries may be taken as an indication that it was a mere classicising of the northern goddess of the apples of youth.\"",
"title": "Name"
},
{
"paragraph_id": 5,
"text": "19th-century scholar Jacob Grimm proposed a potential etymological connection to the idisi. Grimm states that \"with the original form idis the goddess Idunn may possibly be connected.\" Grimm further states that Iðunn may have been known with another name, and that \"Iðunn would seem by Saem. 89a to be an Elvish word, but we do not hear of any other name for the goddess.\"",
"title": "Name"
},
{
"paragraph_id": 6,
"text": "Iðunn appears in the Poetic Edda poem Lokasenna and, included in some modern editions of the Poetic Edda, in the late poem Hrafnagaldr Óðins.",
"title": "Attestations"
},
{
"paragraph_id": 7,
"text": "Iðunn is introduced as Bragi's wife in the prose introduction to the poem Lokasenna, where the two attend a feast held by Ægir. In stanzas 16, 17, and 18, dialog occurs between Loki and Iðunn after Loki has insulted Bragi. In stanza 16, Iðunn (here anglicized as Idunn) says:",
"title": "Attestations"
},
{
"paragraph_id": 8,
"text": "In this exchange, Loki has accused Iðunn of having slept with the killer of her brother. However, neither this brother nor killer are accounted for in any other surviving source. Afterward, the goddess Gefjon speaks up and the poem continues in turn.",
"title": "Attestations"
},
{
"paragraph_id": 9,
"text": "In the poem Hrafnagaldr Óðins, additional information is given about Iðunn, though this information is otherwise unattested. Here, Iðunn is identified as descending from elves, as one of \"Ivaldi's elder children\" and as a dís who dwells in dales. Stanza 6 reads:",
"title": "Attestations"
},
{
"paragraph_id": 10,
"text": "Iðunn is introduced in the Prose Edda in section 26 of the Prose Edda book Gylfaginning. Here, Iðunn is described as Bragi's wife and keeper of an eski (a wooden box made of ash wood and often used for carrying personal possessions) within which she keeps apples. The apples are bitten into by the gods when they begin to grow old and they then become young again, which is described as occurring up until Ragnarök. Gangleri (described as King Gylfi in disguise) states that it seems to him that the gods depend greatly upon Iðunn's good faith and care. With a laugh, High responds that misfortune once came close, that he could tell Gangleri about it, but first he must hear the names of more of the Æsir, and he continues providing information about gods.",
"title": "Attestations"
},
{
"paragraph_id": 11,
"text": "In the book Skáldskaparmál, Idunn is mentioned in its first chapter (numbered as 55) as one of eight ásynjur (goddesses) sitting in their thrones at a banquet in Asgard for Ægir. In chapter 56, Bragi tells Ægir about Iðunn's abduction by the jötunn Þjazi. Bragi says that after hitting an eagle (Þjazi in disguise) with a pole, Loki finds himself stuck to the bird. Loki is pulled further and further into the sky, his feet banging against stones, gravel, and trees until, fearful that his arms will be pulled from their sockets, he roars for mercy, begging the eagle to set him free, to which the eagle agrees, but only on condition that Loki make a solemn vow to lure Iðunn, bearing her apples of youth, from the safety of Asgard. Loki accepts Þjazi's conditions and returns to his friends Odin and Hœnir. At the time agreed upon by Loki and Þjazi, Loki lures Iðunn out of Asgard into \"a certain forest\", telling her that he has discovered some apples that she would find worth keeping, and furthermore that she should bring her own apples with her so that she may compare them with the apples he has discovered. Þjazi arrives in eagle shape, snatches Iðunn, flies away with her and takes her to his home, Þrymheimr.",
"title": "Attestations"
},
{
"paragraph_id": 12,
"text": "The Æsir begin to grow grey and old at the disappearance of Idunn. The Æsir assemble at a thing where they ask one another when Iðunn had been seen last. The Æsir realize that the last time that Iðunn was seen was when she was going outside of Asgard with Loki, and so they have Loki arrested and brought to the thing. Loki is threatened with death and torture. Terrified, Loki says that if the goddess Freyja will lend him her \"falcon shape\" he will search for Iðunn in the land of Jötunheimr. Freyja lends the falcon shape to Loki, and with it he flies north to Jötunheimr. One day later, Loki arrives at Þjazi's home. There he discovers that Þjazi is out at sea in a boat, having left Iðunn at home alone. Loki transforms the goddess into a nut, grasps her in his claws, and flies away with her as fast as possible.",
"title": "Attestations"
},
{
"paragraph_id": 13,
"text": "Þjazi, arriving home to discover Iðunn gone, resumes his eagle shape and flies off in pursuit of Loki, his mighty wings stirring up a storm as he does so. The Æsir, seeing a falcon flying with a nut clutched in its claws and hotly pursued by an eagle, make haste to pile up a great heap of wood shavings and set it alight. The falcon flies over the battlements of Asgard and drops down behind the wall. The eagle, however, overshoots the falcon and, unable to stop, plunges through the fire, setting light to his feathers and falling to the ground within the gates of Asgard, whereat the Æsir set upon the jötunn and kill him, leading the narrator to comment \"and this killing is greatly renowned.\"",
"title": "Attestations"
},
{
"paragraph_id": 14,
"text": "In chapter 10, \"husband of Iðunn\" is given as a means of referring to Bragi. In chapter 86, means of referring to Iðunn are given: \"wife of Bragi\", \"keeper of the apples\", and her apples \"the Æsir's age old cure\". Additionally, in connection to the story of her abduction by Þjazi, she may be referred to as \"Þjazi's booty\". A passage of the 10th-century poem Haustlöng where the skald Þjóðólfr of Hvinir gives a lengthy description of a richly detailed shield he has received that features a depiction of the abduction of Iðunn. Within the cited portions of Haustlöng, Iðunn is referred to as \"the maid who knew the Æsir's old age cure\", \"the gods' lady\", \"ale-Gefn\", \"the Æsir's girl-friend\", and once by name.",
"title": "Attestations"
},
{
"paragraph_id": 15,
"text": "In chapter 33, Iðunn is cited as one of the six ásynjur visiting Ægir. Iðunn appears a final time in the Prose Edda in chapter 75, where she appears in a list of ásynjur.",
"title": "Attestations"
},
{
"paragraph_id": 16,
"text": "Some surviving stories regarding Iðunn focus on her youth-maintaining apples. English scholar Hilda Ellis Davidson links apples to religious practices in Germanic paganism. She points out that buckets of apples were found in the 9th-century Oseberg ship burial site in Norway and that fruit and nuts (Iðunn having been described as being transformed into a nut in Skáldskaparmál) have been found in the early graves of the Germanic peoples in England and elsewhere on the continent of Europe which may have had a symbolic meaning and also that nuts are still a recognized symbol of fertility in Southwest England.",
"title": "Theories"
},
{
"paragraph_id": 17,
"text": "Davidson notes a connection between apples and the Vanir, a group of gods associated with fertility in Norse mythology, citing an instance of eleven \"golden apples\" being given to woo the beautiful Gerðr by Skírnir, who was acting as messenger for the major Vanir god Freyr in stanzas 19 and 20 of Skírnismál. In Skírnismál, Gerðr mentions her brother's slayer in stanza 16, which Davidson states has led to some suggestions that Gerðr may have been connected to Iðunn as they are similar in this way. Davidson also notes a further connection between fertility and apples in Norse mythology; in chapter 2 of the Völsunga saga when the major goddess Frigg sends King Rerir an apple after he prays to Odin for a child, Frigg's messenger (in the guise of a crow) drops the apple in his lap as he sits atop a mound. Rerir's wife's consumption of the apple results in a six-year pregnancy and the caesarean section birth of their son—the hero Völsung.",
"title": "Theories"
},
{
"paragraph_id": 18,
"text": "Davidson points out the \"strange\" phrase \"apples of Hel\" used in an 11th-century poem by the skald Þórbjörn Brúnason. Davidson states this may imply that the apple was thought of by the skald as the food of the dead. Further, Davidson notes that the potentially Germanic goddess Nehalennia is sometimes depicted with apples and parallels exist in early Irish stories. Davidson asserts that while cultivation of the apple in Northern Europe extends back to at least the time of the Roman Empire and came to Europe from the Near East, the native varieties of apple trees growing in Northern Europe are small and bitter. Davidson concludes that in the figure of Iðunn \"we must have a dim reflection of an old symbol: that of the guardian goddess of the life-giving fruit of the other world.\"",
"title": "Theories"
},
{
"paragraph_id": 19,
"text": "David Knipe theorizes Iðunn's abduction by Thjazi in eagle form as an example of the Indo-European motif \"of an eagle who steals the celestial means of immortality.\" In addition, Knipe says that \"a parallel to the theft of Iðunn's apples (symbols of fertility) has been noted in the Celtic myth where Brian, Iuchar, and Icharba, the sons of Tuirenn, assume the guise of hawks in order to steal sacred apples from the garden of Hisberna. Here, too, there is pursuit, the guardians being female griffins.\"",
"title": "Theories"
},
{
"paragraph_id": 20,
"text": "John Lindow theorizes that the possible etymological meaning of Iðunn—\"ever young\"—would potentially allow Iðunn to perform her ability to provide eternal youthfulness to the gods without her apples, and further states that Haustlöng does not mention apples but rather refers to Iðunn as the \"maiden who understood the eternal life of the Æsir.\" Lindow further theorizes that Iðunn's abduction is \"one of the most dangerous moments\" for the gods, as the general movement of female jötnar to the gods would be reversed.",
"title": "Theories"
},
{
"paragraph_id": 21,
"text": "Regarding the accusations levelled towards Iðunn by Loki, Lee Hollander opines that Lokasenna was intended to be humorous and that the accusations thrown by Loki in the poem are not necessarily to be taken as \"generally accepted lore\" at the time it was composed. Rather they are charges that are easy for Loki to make and difficult for his targets to disprove, or which they do not care to refute.",
"title": "Theories"
},
{
"paragraph_id": 22,
"text": "In his study of the skaldic poem Haustlöng, Richard North comments that \"[Iðunn] is probably to be understood as an aspect of Freyja, a goddess whom the gods rely on for their youth and beauty [...]\". Supporting this contention is the fact that she is absent from the listing of goddesses in the Prose Edda's Gylfaginning despite her significance.",
"title": "Theories"
},
{
"paragraph_id": 23,
"text": "Iðunn has been the subject of a number of artistic depictions. These depictions include \"Idun\" (statue, 1821) by H. E. Freund, \"Idun\" (statue, 1843) and \"Idun som bortrövas av jätten Tjasse i örnhamn\" (plaster statue, 1856) by C. G. Qvarnström, \"Brage sittande vid harpan, Idun stående bakom honom\" (1846) by Nils Blommér, \"Iduns Rückkehr nach Valhalla\" by C. Hansen (resulting in an 1862 woodcut modeled on the painting by C. Hammer), \"Bragi und Idun, Balder und Nanna\" (drawing, 1882) by K. Ehrenberg, \"Idun and the Apples\" (1890) by J. Doyle Penrose, \"Brita as Iduna\" (1901) by Carl Larsson, \"Loki och Idun\" (1911) by John Bauer, \"Idun\" (watercolor, 1905) by B. E. Ward, and \"Idun\" (1901) by E. Doepler.",
"title": "Modern influence"
},
{
"paragraph_id": 24,
"text": "The 19th-century composer Richard Wagner's Der Ring des Nibelungen opera cycle features Freia, a version of the goddess Freyja combined with the Iðunn.",
"title": "Modern influence"
},
{
"paragraph_id": 25,
"text": "Idunn Mons, a mons of the planet Venus, is named after Iðunn. The publication of the United States-based Germanic neopagan group The Troth (Idunna, edited by Diana L. Paxson) derives its name from that of the goddess. The Swedish magazine Idun was named after the goddess; she appears with her basket of apples on its banner.",
"title": "Modern influence"
},
{
"paragraph_id": 26,
"text": "In Fire Emblem: The Binding Blade, the sixth instalment of the tactical RPG series, the final boss is the corrupted divine dragon, Idunn. She was able to produce a high amount of dragons very quickly, despite their slow rate of reproduction, likely a nod to Iddun's role as a symbol of fertility.",
"title": "Modern influence"
},
{
"paragraph_id": 27,
"text": "In the 2018 God of War, Apples of Iðunn act as a collectable item to assist the player, though the goddess herself does not physically appear.",
"title": "Modern influence"
},
{
"paragraph_id": 28,
"text": "In the episode 16 of season 6 of the Vikings TV Series, Iðunn is portrayed by English actress Jerry-Jane Pears.",
"title": "Modern influence"
},
{
"paragraph_id": 29,
"text": "",
"title": "References"
}
]
| In Norse mythology, Iðunn is a goddess associated with apples and youth. Iðunn is attested in the Poetic Edda, compiled in the 13th century from earlier traditional sources, and the Prose Edda, written in the 13th century by Snorri Sturluson. In both sources, she is described as the wife of the skaldic god Bragi, and in the Prose Edda, also as a keeper of apples and granter of eternal youthfulness. The Prose Edda relates how Loki was once forced by the jötunn Þjazi to lure Iðunn out of Asgard and into a wood with the promise of apples even fairer than her own. Þjazi, in the form of an eagle, abducts Iðunn from the wood, bearing her off to his home. Iðunn's absence causes the gods to grow old and grey, and they realize that Loki is responsible for her disappearance. Under duress, Loki promises to bring her back and, setting out in the form of a falcon, eventually finds her alone at Þjazi's home. He turns her into a nut and flies back toward Asgard. When Þjazi returns to find Iðunn gone, he assumes his eagle form once more and flies off in hot pursuit of Loki and his precious burden. The gods build a pyre in the courtyard of Asgard and, just as Loki has stopped short of it, kindle it. Unable to halt his frenzied onrush, Þjazi plunges headlong through the fire, falling to the ground with his feathers aflame, whereupon the gods attack and kill him. A number of theories surround Iðunn, including potential links to fertility, and her potential origin in Proto-Indo-European religion. Long the subject of artworks, Iðunn is sometimes referenced in modern popular culture. | 2020-12-16T12:21:52Z | [
"Template:Authority control",
"Template:Good article",
"Template:Short description",
"Template:Redirect",
"Template:Reflist",
"Template:Cite book",
"Template:Refend",
"Template:Commons category",
"Template:Refbegin",
"Template:ISBN",
"Template:Norse mythology"
]
| https://en.wikipedia.org/wiki/Iduna |
|
15,198 | Indic | Indic may refer to: | [
{
"paragraph_id": 0,
"text": "Indic may refer to:",
"title": ""
}
]
| Indic may refer to: Indic languages (disambiguation)
Various scripts:
Brahmic scripts, a family of scripts used to write Indian and other Asian languages
Kharosthi (extinct)
Indian numerals
Indian religions, also known as the Dharmic religions
Other things related to the Indian subcontinent | 2022-05-30T06:41:26Z | [
"Template:Wiktionary",
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/Indic |
|
15,199 | Papua (province) | Papua is a province of Indonesia, comprising the northern coast of Western New Guinea together with island groups in Cenderawasih Bay to the west. It roughly follows the borders of the Papuan customary region of Tabi Saireri. It is bordered by the sovereign state of Papua New Guinea to the east, the Pacific Ocean to the north, Cenderawasih Bay to the west, and the provinces of Central Papua and Highland Papua to the south. The province also shares maritime boundaries with Palau in the Pacific. Following the splitting off of twenty regencies to create the three new provinces of Central Papua, Highland Papua, and South Papua on 30 June 2022, the residual province is divided into eight regencies (kabupaten) and one city (kota), the latter being the provincial capital of Jayapura. The province is rich in natural resources such as gold, nickel, and petroleum. Papua, along with five other Papuan provinces, has a higher degree of autonomy than other Indonesian provinces.
The island of New Guinea has been populated for tens of thousands of years. European traders began frequenting the region around the late 16th century due to the spice trade. In the end, the Dutch Empire emerged as the dominant power in the spice trade, annexing the western part of New Guinea into the colony of the Dutch East Indies. The Dutch remained in New Guinea until 1962, even though other parts of the former colony had declared independence as the Republic of Indonesia in 1945. Following negotiations and conflicts with the Indonesian government, the Dutch transferred Western New Guinea to a United Nations Temporary Executive Authority (UNTEA), which in turn transferred it to Indonesia after the controversial Act of Free Choice. The province was formerly called Irian Jaya and comprised all of Western New Guinea until the inauguration of the province of West Papua (then West Irian Jaya) in 2001. In 2002, Papua adopted its current name and was granted a special autonomous status under Indonesian legislation.
The province of Papua remains one of the least developed provinces in Indonesia. As of 2020, Papua has a GDP per capita of Rp 56.1 million (US$3,970), ranking 11th among all Indonesian provinces. However, Papua only has a Human Development Index of 0.604, the lowest among all Indonesian provinces. The harsh New Guinean terrain and climate are among the main reasons why infrastructure in Papua is considered the most challenging to develop of any Indonesian region.
The 2020 census revealed a population of 4,303,707, of which the majority were Christian. The official estimate for mid 2022 was 4,418,581 prior to the division of the province into four separate provinces. The official estimate of the population in mid 2022 of the reduced province was 1,034,956. The interior is predominantly populated by ethnic Papuans while coastal towns are inhabited by descendants of intermarriages between Papuans, Melanesians and Austronesians, including other Indonesian ethnic groups. Migrants from the rest of Indonesia also tend to inhabit the coastal regions. The province is also home to some uncontacted peoples.
Historical administration: Dutch East India Company (1640s–1799); Dutch East Indies (1800–1942, 1944–1949); Empire of Japan (1942–1944); Dutch New Guinea (1949–1962); UNTEA (1962–1963); Indonesia (1963–present).
There are several theories regarding the origin of the word Papua. One theory is that the name comes from the word 'Papo Ua', named by the Tidore Sultanate, which in the Tidore language means "not joining" or "not being united", meaning that there was no king who rules the area. Before the age of colonization, the Tidore Sultanate controlled some parts of the Bird's Head Peninsula in what is now the provinces of West Papua and Southwest Papua before expanding to also include coastal regions in the current province of Papua. This relationship plays an important historical role in binding the archipelagic civilizations of Indonesia to the Papuan world. Another theory is that the word Papua comes from the Malay word 'papuwah', which means 'frizzled hair'. It was first mentioned in the 1812 Malay Dictionary by William Marsden, although it was not found in earlier dictionaries. In the records of 16th century Portuguese and Spanish sailors, the word 'Papua' is the designation for the inhabitants of the Raja Ampat Islands and the coastal parts of the Bird's Head Peninsula. The former name of the province, Irian Jaya, was suggested during a tribal committee meeting in Tobati, Jayapura, formed by Atmoprasojo, head of the bestuur school in the 1940s. Frans Kaisiepo, the committee leader suggested the name from Mansren Koreri myths, Iri-an from the Biak language of Biak Island, meaning "hot land" referring to the local hot climate, but also from Iryan which means heated process as a metaphor for a land that is entering a new era. In Serui Iri-an (lit. land-nation) means "pillar of nation", while in Merauke Iri-an (lit. placed higher-nation) means "rising spirit" or "to rise". The name was promoted in 1945 by Marcus Kaisiepo, brother of the future governor Frans Kaisiepo. The name Irian was politicized later by Marthin Indey and Silas Papare with the Indonesian acronym 'Ikut Republik Indonesia Anti Nederland' (Join the Republic of Indonesia oppose the Netherlands). The name was used throughout the Suharto administration, until it was changed to Papua during the administration of President Abdurrahman Wahid.
The Dutch, who arrived later under Jacob Le Maire and Willem Schouten, called it Schouten island. They later used this name only to refer to islands off the north coast of Papua proper, the Schouten Islands or Biak Island. When the Dutch colonized this island as part of the Dutch East Indies, they called it Nieuw Guinea.
Speakers align themselves with a political orientation when choosing a name for the western half of the island of New Guinea. The official name of the region is "Papua" according to International Organization for Standardization (ISO). Independence activists refer to the region as "West Papua," while Indonesian officials have also used "West Papua" to name the westernmost province of the region since 2007. Historically, the region has had the official names of Netherlands New Guinea (1895–1962), West New Guinea or West Irian (1945–73), Irian Jaya (1973–2002), and Papua (2002–present).
Papuan habitation of the region is estimated to have begun between 42,000 and 48,000 years ago. Research indicates that the highlands were an early and independent center of agriculture, and show that agriculture developed gradually over several thousands of years; the banana has been cultivated in this region for at least 7,000 years. Austronesian peoples migrating through Maritime Southeast Asia settled in the area at least 3,000 years ago, and populated especially in Cenderawasih Bay. Diverse cultures and languages have developed in the island due to geographical isolation; there are over 300 languages and two hundred additional dialects in the region.
Ghau Yu Kuan, a Chinese merchant, came to Papua around the latter half of 500 AD and referred to it as Tungki, the area where they obtained spices. Meanwhile, in the latter half of 600 AD, the Sumatra-based empire of Srivijaya referred to the island as Janggi. The empire engaged in trade relations with western New Guinea, initially taking items like sandalwood and birds-of-paradise in tribute to China, but later making slaves out of the natives. It was only at the beginning of 700 AD that traders from Persia and Gujarat began to arrive in what is now Papua and called it Dwi Panta or Samudrananta, which means 'at edge of the ocean'.
The 14th-century Majapahit poem Nagarakretagama mentioned Wwanin or Onin and Sran as a recognized territory in the east, today identified as Onin peninsula in Fakfak Regency in the western part of the larger Bomberai Peninsula south of the Bird's Head Peninsula. At that time, Papua was said to be the eighth region of the Majapahit Empire. Wanin or Onin was one of the oldest indigenous names in recorded history to refer to the western part of the island of New Guinea. A transcript from the Nagarakretagama says the following:
According to some linguists, the word Ewanin is another name for Onin as recorded in old communal poems or songs from Wersar, while Sran, popularly misunderstood to refer to Seram Island in Maluku, is more likely another name for a local Papuan kingdom known in its native language as Sran Eman Muun, based in Kaimana, whose influence extended as far as the Kei Islands in southeastern Maluku. In his book Nieuw Guinea, Dutch author W.C. Klein explained the beginning of the influence of the Bacan Sultanate in Papua. There he wrote: In 1569 Papoese hoofden bezoeken Batjan. Ee aanterijken worden vermeld (In 1569, Papuan tribal leaders visited Bacan, which resulted in the creation of new kingdoms). According to the oral history of the Biak people, there used to be a relationship and marriage between their tribal chiefs and the sultans of Tidore in connection with Gurabesi, a naval leader of Waigeo from Biak. The Biak people are the largest Melanesian tribe, spread along the northern coast of Papua, making the Biak language widely used and considered the language of Papuan unity. Due to the relationship of the coastal areas of Papua with the sultans of Maluku, several local kingdoms existed on the island, which shows the entry of feudalism.
Since the 16th century, apart from the Raja Ampat Islands, which were contested between the Bacan Sultanate, Tidore Sultanate, and Ternate Sultanate, the other coastal areas of Papua from the island of Biak to Mimika became vassals of the Tidore Sultanate. The Tidore Sultanate adhered to the trade pact and custom of Uli-Siwa (federation of nine), with nine trade partners led by Tidore in opposition to the Ternate-led Uli Lima (federation of five). In administering its regions in Papua, Tidore divided them into three areas: Korano Ngaruha (lit. Four Kings) or the Raja Ampat Islands, Papoua Gam Sio (lit. Papua The Nine Negeri) and Mafor Soa Raha (lit. Mafor The Four Soa). The role of these kingdoms began to decline with the arrival of European traders in the archipelago, marking the beginning of colonialism in the Indonesian Archipelago. During Tidore's rule, the main exports of the island were resins, spices, slaves and the highly prized feathers of the bird-of-paradise. Sultan Nuku, one of the most famous Tidore sultans who rebelled against Dutch colonization, called himself "Sultan of Tidore and Papua" during his revolt in the 1780s. He commanded loyalty from both Moluccan and Papuan chiefs, especially those of the Raja Ampat Islands. Following Tidore's defeat, much of the territory it claimed in the western part of New Guinea came under Dutch rule as part of the Dutch East Indies.
In 1511, Antonio d'Arbau, a Portuguese sailor, called the Papua region "Os Papuas" or Ilha de Papo. Don Jorge de Menetes, a sailor from Spain, also stopped by Papua a few years later (1526–1527); he referred to the region as 'Papua', a name mentioned in the diary of Antonio Pigafetta, the clerk for the Magellan voyage. The name Papua was known to Pigafetta when he stopped on the island of Tidore. On 16 May 1545, Yñigo Ortiz de Retez, a Spanish maritime explorer in command of the San Juan de Letran, left port in Tidore, a Spanish stronghold in the Maluku Islands, and, going by way of the Talaud Islands and the Schoutens, reached the northern coast of New Guinea, which he coasted until the end of August when, owing to the 5°S latitude, contrary winds and currents forced a return to Tidore, where he arrived on 5 October 1545. Many islands were encountered and first charted along the northern coast of New Guinea, and in the Padaidos, Le Maires, Ninigos, Kaniets and Hermits, to some of which Spanish names were given. On 20 June 1545, at the mouth of the Mamberamo River (charted as San Agustin), he took possession of the land for the Spanish Crown, in the process giving the island the name by which it is known today. He called it Nueva Guinea owing to the resemblance of the local inhabitants to the peoples of the Guinea coast in West Africa. The first map showing the whole island as an island was published in 1600. In 1606, Luís Vaz de Torres explored the southern coast of New Guinea from Milne Bay to the Gulf of Papua, including Orangerie Bay, which he named Bahía de San Lorenzo. His expedition also discovered Basilaki Island, naming it Tierra de San Buenaventura, which he claimed for Spain in July 1606. On 18 October, his expedition reached the western part of the island in present-day Indonesia, and also claimed the territory for the King of Spain.
In 1606, a Duyfken expedition led by the Dutch commander Willem Janszoon landed in Papua. This expedition consisted of 3 ships, which sailed from the north coast of Java and stopped at the Kei Islands and the southwestern coast of Papua. With the increasing Dutch grip on the region, the Spanish left New Guinea in 1663. In 1660, the Dutch recognized the Sultan of Tidore's sovereignty over New Guinea. New Guinea thus became notionally Dutch as the Dutch held power over Tidore.
Dutch New Guinea in the early 19th century was administered from the Moluccas. Although the coast had been mapped in 1825 by Lieutenant Commander D.H. Kolff, there had been no serious effort to establish a permanent presence in Dutch New Guinea. The British, however, had shown considerable interest in the area, and were threatening to settle it. To prevent this, the Governor of the Moluccas, Pieter Merkus, urged the Dutch government to establish posts along the coast. An administrative and trading post was established in 1828 on Triton Bay on the southwest coast of New Guinea. On 24 August 1828, the birthday of King William I of the Netherlands, the Dutch flag was hoisted and the Dutch claimed all of Western New Guinea, which they called Nieuw Guinea. Several native chieftains proclaimed their loyalty to the Netherlands. The post was named Fort Du Bus for the then-Governor General of the Dutch East Indies, Leonard du Bus de Gisignies. Thirty years later, Germans established the first missionary settlement on an island near Manokwari. While the Dutch claimed the south coast west of the 141st meridian in 1828 and the north coast west of Humboldt Bay in 1848, they did not try to develop the region again until 1896; they established settlements in Manokwari and Fak-Fak in response to perceived Australian ownership claims from the eastern half of New Guinea. Great Britain and Germany had recognized the Dutch claims in treaties of 1885 and 1895. At the same time, Britain claimed south-east New Guinea, later known as the Territory of Papua, and Germany claimed the northeast, later known as the Territory of New Guinea. The German, Dutch and British colonial administrators each attempted to suppress the still-widespread practices of inter-village warfare and headhunting within their respective territories. In 1901, the Netherlands formally purchased West New Guinea from the Sultanate of Tidore, incorporating it into the Netherlands East Indies.
Dutch activity in the region remained limited in the first half of the twentieth century, notwithstanding the 1923 establishment of the Nieuw Guinea Beweging (New Guinea Movement) in the Netherlands by ultra right-wing supporters calling for Dutchmen to create a tropical Netherlands in Papua. This pre-war movement, without full government support, was largely unsuccessful in its drive, but did coincide with the development of a plan for Eurasian settlement of the Dutch Indies to establish Dutch farms in northern West New Guinea. This effort also failed as most returned to Java disillusioned, and by 1938 just 50 settlers remained near Hollandia and 258 in Manokwari. The Dutch established the Boven Digul camp in Tanahmerah as a prison for Indonesian nationalists. Among those interned here were writer Marco Kartodikromo, Mohammad Hatta, who would become the first vice president of Indonesia, and Sutan Sjahrir, the first Indonesian Prime Minister.
Before about 1930, European maps showed the highlands as uninhabited forests. When first flown over by aircraft, numerous settlements with agricultural terraces and stockades were observed. The most startling discovery took place on 4 August 1938, when Richard Archbold discovered the Grand Valley of the Baliem River, which had 50,000 yet-undiscovered Stone Age farmers living in villages. The people, known as the Dani, were the last society of its size to make first contact with the rest of the world.
The region became important in World War II with the Pacific War upon the Netherlands' declaration of war on Japan after the bombing of Pearl Harbor. In 1942, the northern coast of West New Guinea and the nearby islands were occupied by Japan. By late 1942, most of the Netherlands Indies were occupied by Japan. Behind Japanese lines in New Guinea, Dutch guerrilla fighters resisted under Mauritz Christiaan Kokkelink. Allied forces drove out the Japanese after Operations Reckless and Persecution, with amphibious landings near Hollandia, from 21 April 1944. The area served as General Douglas MacArthur's headquarters until the conquest of the Philippines in March 1945. Over twenty U.S. bases were established and half a million US personnel moved through the area. West New Guinean farms supplied food for the half million US troops. Papuan men went into battle to carry the wounded, acted as guides and translators, and provided a range of services, from construction work and carpentry to serving as machine shop workers and mechanics. Following the end of the war, the Dutch retained possession of West New Guinea from 1945.
In 1944, Jan van Eechoud set up a school for bureaucrats in Hollandia (now Jayapura). One early headmaster of the school was Soegoro Atmoprasojo, an Indonesian nationalist, Taman Siswa graduate and former Boven-Digoel prisoner; it was in one of the school's meetings that the name "Irian" was suggested. Many of the school's early graduates would go on to found the Indonesian independence movement in Western New Guinea, while some went on to support the Dutch authorities and pursue Papuan independence. In December 1945, Atmoprasojo and his students were planning a rebellion; however, the Dutch authorities were alerted by a defecting member of the Papuan Battalion on 14 December 1945 and, utilising forces from Rabaul, captured around 250 people possibly involved in the planned attack. The news of the Indonesian independence proclamation arrived in New Guinea primarily through shipping laborers associated with the Sea Transport Union of Indonesia (Sarpelindo), who were working on ships under Australian and Dutch flags. This led to the formation of a branch of the Komite Indonesia Merdeka (KIM), originally an organization for Indonesian exiles in Sydney, in Abepura, Hollandia in October 1946. It was first led by Dr. J.A. Gerungan, a woman doctor who headed an Abepura hospital; by December 1946, it came to be led by Martin Indey. KIM was one of the first Indonesian nationalist groups in New Guinea, whose members were mostly former associates of Soegoro. Simultaneously, another separate Indonesian nationalist movement in New Guinea formed when Dr. Sam Ratulangi was exiled to Serui, along with his six staff, by the Netherlands Indies Civil Administration on 5 July 1946. In exile he met Silas Papare, who had also been exiled after a failed rebellion led by Pagoncang Alam to free Atmoprasojo, and on 29 November 1946 an organization called the Indonesian Irian Independence Party (PKII) was formed. A year later, on 17 August 1947, former students of Soegoro and others held a red and white flag-raising ceremony to commemorate Indonesian independence day.
KIM and PKII members began movements in other areas of New Guinea; most of these were unsuccessful, and the perpetrators were either imprisoned or killed. In Manokwari, a movement called the Red and White Movement (GMP) was founded, led by Petrus Walebong and Samuel D. Kawab. This movement later spread to Babo, Kokas, Fakfak, and Sorong. In Biak, a local branch of KIM was joined with Perserikatan Indonesia Merdeka (PIM), which had been formed earlier, in September 1945, under the leadership of Lukas Rumkorem. Lukas was captured and exiled to Hollandia on the charge that he had instigated violence among the local population, accused of trying to kill Frans Kaisiepo and Marcus Kaisiepo. Still the movement did not disappear in Biak: Stevanus Yoseph, together with Petero Jandi, Terianus Simbiak, Honokh Rambrar, Petrus Kaiwai and Hermanus Rumere, instigated another revolt on 19 March 1948. Dutch authorities had to send reinforcements from Jayapura, and imposed harsher penalties, with capital punishment for Petero Jandi and a life sentence for Stevanus Yoseph. Meanwhile, another organization, the Association of Young Men of Indonesia (PPI), was formed on 17 August 1947 under the leadership of Abraham Koromath.
Around the Bomberai Peninsula area of Fakfak, specifically in Kokas, an Indonesian nationalist movement was led by Machmud Singgirei Rumagesan. On 1 March 1946, he ordered that all Dutch flags in Kokas be replaced with Indonesian flags. He was later imprisoned on Doom Island, Sorong, where he managed to recruit some followers as well as the support of the local Sangaji Malan. Dutch authorities, later aided by incoming troops from Sorong, arrested King Rumagesan, and he was given capital punishment. Meanwhile, in Kaimana, King Muhammad Achmad Aituarauw founded an organization called Independence With Kaimana, West Irian (MBKIB), which similarly boycotted Dutch flags every 31 August. In response to this activity, Aituarauw was arrested by the Dutch in 1948 and exiled to Ayamaru for 10 years. Other movements opposing the Dutch under local Papuan kings include the New Guinea Islamic Union (KING) led by Ibrahim Bauw, King of Rumbati; Gerakan Pemuda Organisasi Muda led by Machmud Singgirei Rumagesan and Abbas Iha; and Persatuan Islam Kaimana (PIK) of Kaimana led by Usman Saad and the King of Namatota, Umbair.
Following the Indonesian National Revolution, the Netherlands formally transferred sovereignty to the United States of Indonesia, on 27 December 1949. However, the Dutch refused to include Netherlands New Guinea in the new Indonesian Republic and took steps to prepare it for independence as a separate country. Following the failure of the Dutch and Indonesians to resolve their differences over West New Guinea during the Dutch-Indonesian Round Table Conference in late 1949, it was decided that the present status quo of the territory would be maintained and then negotiated bilaterally one year after the date of the transfer of sovereignty. However, both sides were still unable to resolve their differences in 1950, which led the Indonesian President Sukarno to accuse the Dutch of reneging on their promises to negotiate the handover of the territory. On 17 August 1950, Sukarno dissolved the United States of Indonesia and proclaimed the unitary Republic of Indonesia. Indonesia also began to initiate incursions to New Guinea in 1952, though most of these efforts would be unsuccessful. Most of these failed infiltrators would be sent to Boven-Digoel which would form clandestine intelligence groups working from the primarily southern part of New Guinea in preparation for war. Meanwhile, following the defeat of the third Afro-Asian resolution in November 1957, the Indonesian government embarked on a national campaign targeting Dutch interests in Indonesia; A total of 700 Dutch-owned companies with a valuation total of around $1.5 billion was nationalised. By January 1958, ten thousand Dutch nationals had left Indonesia, many returning to the Netherlands. By June 1960, around thirteen thousand Dutch nationals mostly Eurasians from New Guinea left for Australia, with around a thousand moving to the Netherlands. Following a sustained period of harassment against Dutch diplomatic representatives in Jakarta, the Indonesian government formally severed relations with the Netherlands in August 1960.
In response to Indonesian aggression, the Netherlands government stepped up its efforts to prepare the Papuan people for self-determination in 1959. These efforts culminated in the establishment of a hospital in Hollandia (modern–day Jayapura, currently Jayapura Regional General Hospital or RSUD Jayapura), a shipyard in Manokwari, agricultural research sites, plantations, and a military force known as the Papuan Volunteer Corps. By 1960, a legislative New Guinea Council had been established with a mixture of legislative, advisory and policy functions. Half of its members were to be elected, and elections for this council were held the following year. Most importantly, the Dutch also sought to create a sense of West Papuan national identity, and these efforts led to the creation of a national flag (the Morning Star flag), a national anthem, and a coat of arms. The Dutch had planned to transfer independence to West New Guinea in 1970.
Following the raising of the Papuan National Flag on 1 December 1961, tensions further escalated. Multiple rebellions erupted inside New Guinea against Dutch authorities, such as in Enarotali, Agats, Kokas, Merauke, Sorong and Baliem Valley. On 18 December 1961 Sukarno issued the Tri Komando Rakjat (People's Triple Command), calling the Indonesian people to defeat the formation of an independent state of West Papua, raise the Indonesian flag in the territory, and be ready for mobilisation at any time. In 1962 Indonesia launched a significant campaign of airborne and seaborne infiltrations against the disputed territory, beginning with a seaborne infiltration launched by Indonesian forces on 15 January 1962. The Indonesian attack was defeated by Dutch forces including the Dutch destroyers Evertsen and Kortenaer, the so-called Vlakke Hoek incident. Amongst the casualties was the Indonesian Deputy Chief of the Naval Staff; Commodore Yos Sudarso.
It was finally agreed through the New York Agreement in 1962 that the administration of Western New Guinea would be temporarily transferred from the Netherlands to Indonesia and that by 1969 the United Nations should oversee a referendum of the Papuan people, in which they would be given two options: to remain part of Indonesia or to become an independent nation. For a period of time, Dutch New Guinea was under the United Nations Temporary Executive Authority, before being transferred to Indonesia in 1963. A referendum was held in 1969, referred to locally as Penentuan Pendapat Rakyat (Determination of the People's Opinion), or the Act of Free Choice. The referendum was recognized by the international community and the region became the Indonesian province of Irian Jaya. The province has been named Papua since 2002.
Following the Act of Free Choice in 1969, Western New Guinea was formally integrated into the Republic of Indonesia. Instead of a referendum of the 816,000 Papuans, only 1,022 Papuan tribal representatives were allowed to vote, and they were coerced into voting in favor of integration. While several international observers including journalists and diplomats criticized the referendum as being rigged, the U.S. and Australia supported Indonesia's efforts to secure acceptance in the United Nations for the pro-integration vote. That same year, 84 member states voted in favor of the United Nations accepting the result, with 30 others abstaining. Due to the Netherlands' efforts to promote a West Papuan national identity, a significant number of Papuans refused to accept the territory's integration into Indonesia. These formed the separatist Organisasi Papua Merdeka (Free Papua Movement) and have waged an insurgency against the Indonesian authorities, which continues to this day.
In January 2003 President Megawati Sukarnoputri signed an order dividing Papua into three provinces: Central Irian Jaya (Irian Jaya Tengah), Papua (or East Irian Jaya, Irian Jaya Timur), and West Papua (Irian Jaya Barat). The formality of installing a local government for Jakarta in Irian Jaya Barat (West) took place in February 2003 and a governor was appointed in November; a government for Irian Jaya Tengah (Central Irian Jaya) was delayed from August 2003 due to violent local protests. The creation of this separate Central Irian Jaya Province was blocked by Indonesian courts, who declared it to be unconstitutional and in contravention of the Papua's special autonomy agreement. The previous division into two provinces was allowed to stand as an established fact.
Following his election in 2014, Indonesian president, Joko Widodo, embarked on reforms intended to alleviate grievances of Native Papuans, such as stopping the transmigration program and starting massive infrastructure spending in Papua, including building Trans-Papua roads network. The Joko Widodo administration has prioritized infrastructure and human resource development as a great framework for solving the conflict in Papua. The administration has implemented a one-price fuel policy in Papua, with Jokowi assessing that it is a form of "justice" for all Papuans. The administration has also provided free primary and secondary education.
Security forces have been accused of abuses in the region including extrajudicial killings, torture, arrests of activists, and displacements of entire villages. On the other hand, separatists have been accused of, and have claimed responsibility for, much of the same violence, such as extrajudicial killings of both Papuan and non-Papuan civilians, torture, rape, and attacks on local villages. Protests against Indonesian rule in Papua happen frequently, the most recent being the 2019 Papua protests, one of the largest and most violent, which included the burning of mostly non-Papuan civilians as well as Papuans who did not want to join the rally.
In July 2022, regencies in central and southern Papua were separated from the province, to be created into three new provinces: South Papua administered from Merauke Regency, Central Papua administered from Nabire Regency, and Highlands Papua administered from Jayawijaya Regency.
The province of Papua is governed by a directly elected governor and a regional legislature, People's Representative Council of Papua (Dewan Perwakilan Rakyat Papua, abbreviated as DPRP or DPR Papua). A unique government organization in the province is the Papuan People's Assembly (Majelis Rakyat Papua), which was formed by the Indonesian government in 2005, as mandated by the Papua Special Autonomy Law, as a coalition of Papuan tribal chiefs, Papuan religious leaders, and Papuan women representatives, tasked with arbitration and speaking on behalf of Papuan tribal customs.
Since 2014, the DPRP has 55 members who are elected through General elections every five years and 14 people who are appointed through the special autonomy, bringing the total number of DPRP members to 69 people. The DPRP leadership consists of 1 Chairperson and 3 Deputy Chairmen who come from political parties that have the most seats and votes. The current DPRP members are the results of the 2019 General Election which was sworn in on 31 October 2019 by the Chairperson of the Jayapura High Court at the Papua DPR Building. The composition of DPRP members for the 2019–2024 period consists of 13 political parties where the Nasdem Party is the political party with the most seats, with 8 seats, followed by the Democratic Party which also won 8 seats and the Indonesian Democratic Party of Struggle which won 7 seats.
The province of Papua is one of six provinces to have obtained special autonomy status, the others being Aceh, West Papua, Central Papua, Highland Papua and South Papua (the Special Regions of Jakarta and Yogyakarta have a similar province-level special status). According to Law 21/2001 on Special Autonomy Status (UU Nomor 21 Tahun 2001 tentang Otonomi khusus Papua), the provincial government of Papua is provided with authority within all sectors of administration, except for the five strategic areas of foreign affairs, security and defense, monetary and fiscal affairs, religion and justice. The provincial government is authorized to issue local regulations to further stipulate the implementation of the special autonomy, including regulating the authority of districts and municipalities within the province. Due to its special autonomy status, Papua province is provided with significant amount of special autonomy funds, which can be used to benefit its indigenous peoples. But the province has low fiscal capacity and it is highly dependent on unconditional transfers and the above-mentioned special autonomy fund, which accounted for about 55% of total revenues in 2008.
After obtaining its special autonomy status, to allow the local population access to timber production benefits, the Papuan provincial government issued a number of decrees, enabling:
As of 2022 (following the separation of Central Papua, Highland Papua, and South Papua provinces), the residual Papua Province consisted of 8 regencies (kabupaten) and one city (kota); these regencies comprise the northern belt from Waropen Regency to Keerom Regency, plus the island groups to their northwest. Initially the area now forming the present Papua Province contained three regencies - Jayapura, Yapen Waropen and Biak Numfor. The City of Jayapura was separated from Jayapura Regency on 2 August 1993 and formed into a separate city-level administration. On 11 December 2002 three new regencies were created - Keerom and Sarmi from parts of Jayapura Regency, and Waropen from part of Yapen Waropen Regency (the rest of this regency was renamed Yapen Islands). On 18 December 2003 a further regency - Supiori - was created from part of Biak Numfor Regency, and on 15 March 2007 a further regency - Mamberamo Raya - was created from the western part of Sarmi Regency. These regencies and the city are together subdivided into districts (distrik), and thence into "villages" (kelurahan and desa). With the release of Act Number 21 of 2001 concerning the Special Autonomous Region of Papua Province, the term distrik was used instead of kecamatan throughout Western New Guinea. The difference between the two is merely the terminology, with kepala distrik being the district head.
The regencies (kabupaten) and the city (kota) are listed below with their areas and their populations at the 2020 census and subsequent official estimates for mid 2022, together with the 2020 Human Development Index of each administrative divisions.
The island of New Guinea lies to the east of the Malay Archipelago, with which it is sometimes included as part of a greater Indo-Australian Archipelago. Geologically it is a part of the same tectonic plate as Australia. When world sea levels were low, the two shared shorelines (which now lie 100 to 140 metres below sea level), and combined with lands now inundated into the tectonic continent of Sahul, also known as Greater Australia. The two landmasses became separated when the area now known as the Torres Strait flooded after the end of the Last Glacial Period.
The province of Papua is located between 2°25'N – 9°S and 130°E – 141°E. The total area of Papua is now 82,680.95 km² (31,923.29 sq mi). Until its division in 2022 into four provinces, Papua Province was the province with the largest area in Indonesia, with a total area of 312,816.35 km², or 19.33% of the total area of the Indonesian archipelago. The boundaries of Papua are: the Pacific Ocean (north), Highland Papua (south), Central Papua (southwest) and Papua New Guinea (east). Papua, like most parts of Indonesia, has two seasons, the dry season and the rainy season. From June to September the wind blows from Australia and does not carry much water vapor, resulting in a dry season. On the other hand, from December to March, the wind currents contain a lot of water vapor originating from Asia and the Pacific Ocean, bringing the rainy season. The average temperature in Papua ranges from 19 °C to 28 °C and humidity is between 80% and 89%. The average annual rainfall is between 1,500 mm and 7,500 mm. Snowfall sometimes occurs in the mountainous areas of New Guinea, especially the central highlands region.
Various other smaller mountain ranges occur both north and west of the central ranges. Except in high elevations, most areas possess a hot, humid climate throughout the year, with some seasonal variation associated with the northeast monsoon season.
Another major habitat feature is the vast northern lowlands. Stretching for hundreds of kilometers, these include lowland rainforests, extensive wetlands, savanna grasslands, and some of the largest expanses of mangrove forest in the world. The northern lowlands are drained principally by the province's largest river, the Mamberamo River and its tributaries on the western side, and by the Sepik on the eastern side. The result is a large area of lakes and rivers known as the Lakes Plains region.
Anthropologically, New Guinea is considered part of Melanesia. Botanically, New Guinea is considered part of Malesia, a floristic region that extends from the Malay Peninsula across Indonesia to New Guinea and the East Melanesian Islands. The flora of New Guinea is a mixture of many tropical rainforest species with origins in Asia, together with typically Australasian flora. Typical Southern Hemisphere flora include the Conifers Podocarpus and the rainforest emergents Araucaria and Agathis, as well as Tree ferns and several species of Eucalyptus.
New Guinea is differentiated from its drier, flatter, and less fertile southern counterpart, Australia, by its much higher rainfall and its active volcanic geology. Yet the two land masses share a similar animal fauna, with marsupials, including wallabies and possums, and the egg-laying monotreme, the echidna. Other than bats and some two dozen indigenous rodent genera, there are no pre-human indigenous placental mammals. Pigs, several additional species of rats, and the ancestor of the New Guinea singing dog were introduced with human colonization.
The island has an estimated 16,000 species of plant, 124 genera of which are endemic. Papua's known forest fauna includes; marsupials (including possums, wallabies, tree-kangaroos, cuscuses); other mammals (including the endangered long-beaked echidna); bird species such as birds-of-paradise, cassowaries, parrots, and cockatoos; the world's longest lizards (Papua monitor); and the world's largest butterflies.
The waterways and wetlands of Papua are also home to salt and freshwater crocodile, tree monitors, flying foxes, osprey, bats and other animals; while the equatorial glacier fields remain largely unexplored.
Protected areas within Papua province include the World Heritage Lorentz National Park, and the Wasur National Park, a Ramsar wetland of international importance. Birdlife International has called Lorentz Park "probably the single most important reserve in New Guinea". It contains five of World Wildlife Fund's "Global 200" ecoregions: Southern New Guinea Lowland Forests; New Guinea Montane Forests; New Guinea Central Range Subalpine Grasslands; New Guinea mangroves; and New Guinea Rivers and Streams. Lorentz Park contains many unmapped and unexplored areas, and is certain to contain many species of plants and animals as yet unknown to Western science. Local communities' ethnobotanical and ethnozoological knowledge of the Lorentz biota is also very poorly documented. Wasur National Park, for its part, has such high biodiversity that it has been dubbed the "Serengeti of Papua". About 70% of the total area of the park consists of savanna (see Trans-Fly savanna and grasslands), while the remaining vegetation is swamp forest, monsoon forest, coastal forest, bamboo forest, grassy plains and large stretches of sago swamp forest. The dominant plants include mangroves, Terminalia, and Melaleuca species. The park provides habitat for up to 358 bird species, of which some 80 are endemic to the island of New Guinea. Fish diversity is also high in the region, with some 111 species found in the ecoregion, a large number of which are recorded from Wasur. The park's wetland provides habitat for various species of lobster and crab as well.
Several parts of the province remain unexplored due to steep terrain, leaving a high likelihood that many species of flora and fauna are yet to be discovered. In February 2006, a team of scientists exploring the Foja Mountains, Sarmi, discovered new species of birds, butterflies, amphibians, and plants, including possibly the largest-flowered species of rhododendron. In December 2007, a second scientific expedition was made to the mountain range. The expedition led to the discovery of two new species: the first being a 1.4 kg giant rat (Mallomys sp.) approximately five times the size of a regular brown rat, the second a pygmy possum (Cercartetus sp.) described by scientists as "one of the world's smallest marsupials." An expedition late in 2008, backed by the Indonesian Institute of Sciences, National Geographic Society and Smithsonian Institution, was made in order to assess the area's biodiversity. New types of animals recorded include a frog with a long erectile nose, a large woolly rat, an imperial-pigeon with rust, grey and white plumage, a 25 cm gecko with claws rather than pads on its toes, and a small, 30 cm high, black forest wallaby (a member of the genus Dorcopsis).
Ecological threats include logging-induced deforestation, forest conversion for plantation agriculture (including oil palm), smallholder agricultural conversion, the introduction and potential spread of alien species such as the crab-eating macaque which preys on and competes with indigenous species, the illegal species trade, and water pollution from oil and mining operations.
Papua GDP share by sector (2005)
Papua is reported to be one of Indonesia's poorest regions. The province is rich in natural resources but is held back by limited infrastructure and a shortage of skilled human resources. So far, Papua has experienced fairly good economic development thanks to its economic resources, especially mining, forestry, agriculture and fisheries products. Economic development has been uneven in Papua, and poverty in the region remains high by Indonesian standards. Part of the problem has been neglect of the poor—too little or the wrong kind of government support from Jakarta and Jayapura. A major factor in this is the extraordinarily high cost of delivering goods and services to large numbers of isolated communities, in the absence of a developed road or river network (the latter in contrast to Kalimantan) providing access to the interior and the highlands. Intermittent political and military conflict and tight security controls have also contributed to the problem, but with the exception of some border regions and a few pockets in the highlands, this has not been the main factor contributing to underdevelopment.
Papua's gross domestic product grew at a faster rate than the national average until, and throughout the financial crisis of 1997–98. However, the differences are much smaller if mining is excluded from the provincial GDP. Given that most mining revenues were commandeered by the central government until the Special Autonomy Law was passed in 2001, provincial GDP without mining is most likely a better measure of Papuan GDP during the pre- and immediate post-crisis periods. On a per capita basis, the GDP growth rates for both Papua and Indonesia are lower than those for total GDP. However, the gap between per capita GDP and total GDP is larger for Papua than for Indonesia as a whole, reflecting Papua's high population growth rates.
Although Papua has experienced almost no growth in GDP, the situation is not as serious as one might think. It is true that the mining sector, dominated by Freeport Indonesia, has been declining over the last decade or so, leading to a fall in the value of exports. On the other hand, government spending and fixed capital investment have both grown, by well over 10 per cent per year, contributing to growth in sectors such as finance, construction, transport and communications, and trade, hotels and restaurants. With so many sectors still experiencing respectable levels of growth, the impact of the stagnant economy on the welfare of the population will probably be limited. It should also be remembered that mining is typically an enclave activity; its impact on the general public is fairly limited, regardless of whether it is booming or contracting.
Papua has depended heavily on natural resources, especially the mining, oil and gas sectors, since the mid-1970s. Although this is still the case, there have been some structural changes in the two provincial economies since the split in 2003. The contribution of mining to the economy of Papua province declined from 62 per cent in 2003 to 47 per cent in 2012. The shares of agriculture and manufacturing also fell, but that of utilities remained the same. A few other sectors, notably construction and services, increased their shares during the period. Despite these structural changes, the economy of Papua province continues to be dominated by the mining sector, and in particular by a single company, Freeport indonesia.
Mining remains one of the dominant economic sectors in Papua. The Grasberg Mine, the world's largest gold mine and second-largest copper mine, is located in the highlands near Puncak Jaya, the highest mountain in Papua and in the whole of Indonesia. The Grasberg Mine produces 1.063 billion pounds of copper, 1.061 million ounces of gold and 2.9 million ounces of silver. It has 19,500 employees and is operated by PT Freeport Indonesia (PT-FI), which used to be 90.64% owned by Freeport-McMoRan (FCX). In August 2017, FCX announced that it would divest its ownership in PT-FI so that Indonesia owns 51%. In return, the Contract of Work (CoW) will be replaced by a special license (IUPK) with mining rights to 2041, and FCX will build a new smelter by 2022.
Besides mining, there are at least three other important economic sectors (excluding the government sector) in the Papuan economy. The first is agriculture, particularly food crops, forestry and fisheries. Agriculture made up 10.4 per cent of provincial GDP in 2005 but grew at an average rate of only 0.1 per cent per annum in 2000–05. The second important sector is trade, hotels and restaurants, which contributed 4.0 per cent of provincial GDP in 2005. Within this sector, trade contributed most to provincial GDP. However, the subsector with the highest growth rate was hotels, which grew at 13.2 per cent per annum in 2000–05. The third important sector is transport and Communications, which contributed 3.4 per cent of provincial GDP in 2005. The sector grew at an average annual rate of 5.3 percent in 2000–05, slightly below the national level. Within the sector, sea transport, air transport and communications performed particularly well. The role of private enterprise in developing communications and air transport has become increasingly significant. Since private enterprise will only expand if businesspeople see good prospects to make a profit, this is certainly an encouraging development. At current rates of growth, the transport and communications sector could support the development of agriculture in Papua. However, so far, most of the growth in communications has been between the rapidly expanding urban areas of Jayapura, Timika, Merauke, and between them and the rest of Indonesia. Nevertheless, in the medium term, improved communication networks may create opportunities for Papua to shift from heavy dependence on the mining sector to greater reliance on the agricultural sector. With good international demand for palm oil anticipated in the medium term, production of this commodity could be expanded. However, the negative effects of deforestation on the local environment should be a major consideration in the selection of new areas for this and any other plantation crop. In 2011, Papuan caretaker governor Syamsul Arief Rivai claimed Papua's forests cover 42 million hectares with an estimated worth of Rp 700 trillion ($78 billion) and that if the forests were managed properly and sustainably, they could produce over 500 million cubic meters of logs per annum.
Manufacturing and banking make up a tiny proportion of the regional economy and experienced negative growth in 2000–05. Poor infrastructure and lack of human capital are the most likely reasons for the poor performance of manufacturing. In addition, the costs of manufacturing are typically very high in Papua, as they are in many other outer island regions of Indonesia. Both within Indonesia and in the world economy, Papua's comparative advantage will continue to lie in agriculture and natural resource-based industries for a long time to come. A more significant role for manufacturing is unlikely given the far lower cost of labor and better infrastructure in Java. But provided that there are substantial improvements in infrastructure and communications, over the longer term manufacturing can be expected to cluster around activities related to agriculture—for example, food processing.
Compared to other parts of Indonesia, infrastructure in Papua is among the least developed, owing to its distance from the national capital, Jakarta. Nevertheless, over the past few years, the central government has invested significant sums of money to build and improve infrastructure in the province. The infrastructure development efforts of the Ministry of Public Works and Housing in Papua have been massive over the last 10 years. These efforts are carried out to accelerate equitable development and support regional development in Papua. The main focus of infrastructure development in Papua is to improve regional connectivity, improve the quality of life through the provision of basic infrastructure, and increase food security through the development of water resources infrastructure. Infrastructure development in Papua up to 2017 has shown significant progress.
Electricity distribution in the province, as in the whole country, is operated and managed by the Perusahaan Listrik Negara (PLN). Originally, most Papuan villages did not have access to electricity. At the beginning of 2016, the Indonesian government, through the Ministry of Energy and Mineral Resources, introduced a program named "Indonesia Terang" (Bright Indonesia). The aim of this program is to speed up the electrification rate (ER), with priority given to the six provinces in the eastern area of Indonesia, including Papua Province. The national ER target for 2019 is 97%. While the national ER was already high (88.30%) in 2015, Papua still had the lowest ER (45.93%) among the provinces. The plan to boost ER in the eastern area is to connect consumers in villages that are not yet electrified to new renewable energy sources.
The percentage of households connected to electricity in Papua (the electrification ratio, ER) is the lowest among the provinces of Indonesia. Data from the Ministry of Energy and Mineral Resources shows that only Papua Province had an ER below 50% (45.93%), while the national average ER was 88.30%. ERs of more than 85% are found across the rest of the western area of the country. The main reasons for the low ER in Papua are its huge, landlocked and mountainous terrain and its low population density. Energy consumption in the residential sector, 457 GWh in 2014, contributes to the electrification rate in Papua Province. But again, geographic and demographic obstacles mean that electrical energy is not well distributed in Papua. ER levels are usually higher in coastal areas but lower in mountain areas. This can be seen in the ERs of the new provinces formed in 2022: Papua Province has an ER of 89.22%, while the former regions of South Papua, Central Papua, and Highland Papua have ERs of 73.54%, 47.36%, and 12.09% respectively.
All piped water supply in the province is managed by the Papua Municipal Waterworks (Indonesian: Perusahaan Daerah Air Minum Papua – PDAM Papua). The supply of clean water is one of the main problems faced by the province, especially during drought seasons. Papua has been named the province with the worst sanitation in Indonesia, garnering a score of 45 while the national average is 75, due to unhealthy lifestyle habits and a lack of clean water. In response, the government has invested in building sufficient infrastructure to hold clean water. Several new dams are also being built by the government throughout the province.
Achieving universal access to drinking water, sanitation and hygiene is essential to accelerating progress in the fields of health, education and poverty alleviation. In 2015, about a quarter of the population used basic sanitation facilities at home, while a third still practiced open defecation. The coverage of improved drinking water sources is much higher, both in households and schools. Inequality based on income and residence levels is stark, demonstrating the importance of integrating equity principles into policy and practice and expanding the coverage of community-based total sanitation programs.
Papua is one of the largest provinces in Indonesia, but it has the least telecommunications coverage because of its geographic isolation. Services have not yet reached all districts and sub-districts, and the distribution of telecommunication services within the province remains very uneven, with services and infrastructure concentrated in certain areas such as Jayapura. Although the Human Development Index in Papua rises every year, the increase has not been matched by an adequate expansion of telecommunication facilities.
The Ministry of Communication and Information Technology, through the Information Technology Accessibility Agency (BAKTI), has built around 9 base transceiver stations in remote areas of Papua, namely Puncak Jaya Regency and Mamberamo Raya Regency, to provide internet access. In the early stages, the internet was prioritized to support education, health, and better public services. To realize connectivity in accordance with government priorities, the ministry aimed to reach all districts in the Papua region with high-speed internet networks by 2020. A fast internet backbone network is planned for all districts in Papua and West Papua, with 31 regencies scheduled to receive new high-speed internet access.
In late 2019, the government announced the completion of the Palapa Ring project, a priority infrastructure project aimed at providing access to 4G internet services in more than 500 regencies across Indonesia, Papua included. The project is estimated to have cost US$1.5 billion and comprises 35,000 km (21,747 miles) of undersea fiber-optic cables and 21,000 km (13,000 miles) of land cables, stretching from the westernmost city in Indonesia, Sabang, to the easternmost town, Merauke, which is located in Papua. The cables also traverse every district from the northernmost island, Miangas, to the southernmost island, Rote. Through the Palapa Ring, the government can provide a network capacity of up to 100 Gbit/s even in the most outlying regions of the country.
So far, air routes have been the mainstay in Papua and West Papua provinces for transporting people and goods, including basic necessities, because road infrastructure is inadequate. This has resulted in high distribution costs, which in turn raise the prices of various staple goods, especially in rural areas. The government is therefore trying to reduce distribution costs by building the Trans-Papua Highway. As of 2016, 3,498 kilometers of the Trans-Papua Highway had been connected, of which 2,075 kilometers were asphalted while the rest were still dirt roads; a further 827 km had yet to be connected. The development of the highway will create connectivity between regions and is expected to accelerate economic growth in Papua and West Papua over the long term. Apart from the construction of the Trans-Papua Highway, the government is also preparing the first railway development project in Papua, which is currently in the feasibility study phase. Infrastructure funding for Papua is substantial: the cost of connecting all roads in Papua and West Papua is estimated at Rp 12.5 trillion (US$870 million), and in the 2016 State Budget the government also allocated an additional infrastructure development fund of Rp 1.8 trillion (US$126 million).
Data from the Ministry of Public Works and Housing (KPUPR) put the length of the Trans-Papua Highway within Papua at 2,902 km. Its segments include Merauke-Tanahmerah-Waropko (543 km), Waropko-Oksibil (136 km), Dekai-Oksibil (225 km), Kenyam-Dekai (180 km), Wamena-Habema-Kenyam-Mamug (295 km), Jayapura-Elelim-Wamena (585 km), Wamena-Mulia-Ilaga-Enarotali (466 km), Wagete-Timika (196 km), and Enarotali-Wagete-Nabire (285 km). As of 2020, only about 200–300 kilometers of the Trans-Papua Highway had not yet been connected.
As in other provinces of Indonesia, Papua follows the left-hand traffic rule, and cities and towns such as Jayapura and Merauke provide public transportation such as buses and taxis, along with Gojek and Grab services. The Youtefa Bridge in Jayapura is currently the longest bridge in the province, with a total length of 732 metres (2,402 ft). The bridge cuts the distance and travel time from Jayapura city center to Muara Tami district and to the Skouw State Border Post on the Indonesia–Papua New Guinea border. Construction was carried out by a consortium of state-owned construction companies, PT Pembangunan Perumahan Tbk, PT Hutama Karya (Persero), and PT Nindya Karya (Persero), at a total cost of IDR 1.87 trillion, with support from the Ministry of Public Works and Housing worth IDR 1.3 trillion. The main span of the Youtefa Bridge was not assembled at the bridge site but at the PAL Indonesia shipyard in Surabaya, East Java; fabricating it in Surabaya improved safety and welding quality and shortened the installation time to 3 months. This was the first time an arch bridge span was fabricated elsewhere and then brought to its location. The span, weighing 2,000 tons and 112.5 m long, was shipped from Surabaya on a 3,200-kilometer journey lasting 19 days. The first span was installed on 21 February 2018, and the second on 15 March 2018, in approximately 6 hours. The bridge was inaugurated on 28 October 2019 by President Joko Widodo.
A railway with a length of 205 km is being planned, which would connect the provincial capital Jayapura and Sarmi to the east. Further plans include connecting the railway to Sorong and Manokwari in West Papua. In total, the railway would have a length of 595 km, forming part of the Trans-Papua Railway. Construction of the railway is still in the planning stage. A Light Rapid Transport (LRT) connecting Jayapura and Sentani is also being planned.
Papua's geography, which is hilly and densely forested, combined with the lack of adequate road infrastructure of the kind found in Java or Sumatra, makes transportation a major obstacle for local communities. Air transportation is by far the most effective means of transport and the one most needed by the island's inhabitants, although it is not cheap. A number of airlines have taken advantage of these conditions by opening busy routes to and from a number of cities, both district and provincial capitals. In terms of airport infrastructure, quite a few airports can accommodate jets such as Boeing and Airbus aircraft as well as propeller planes such as ATRs and Cessnas.
Sentani International Airport in Jayapura is the largest airport in the province, serving as its main gateway from other parts of Indonesia. Air traffic is roughly divided between flights to destinations within Papua and flights linking Papua to other parts of Indonesia. The airport connects Jayapura with other Indonesian cities such as Manado, Makassar, Surabaya, and Jakarta, as well as with towns within the province such as Biak, Timika, and Merauke. Sentani International Airport is also the main base for several aviation organizations, including Associated Mission Aviation, Mission Aviation Fellowship, YAJASI, and Tariku Aviation. The airport currently has no international flights, although there are plans to open routes to neighboring Papua New Guinea in the future. Other medium-sized airports in the province are Mozes Kilangin Airport in Timika, Mopah International Airport in Merauke, Frans Kaisiepo International Airport in Biak, and Wamena Airport in Wamena. There are over 300 documented airstrips in Papua, most of them small strips that can only accommodate small airplanes. The government plans to open more airports in the future to connect isolated regions of the province.
Water transportation, which includes sea and river transportation, is also one of the most crucial forms of transportation in the province after air transportation. The number of passengers departing by sea in Papua in October 2019 decreased by 16.03 percent, from 18,785 people in September 2019 to 15,773 people, while the number of passengers arriving by sea decreased by 12.32 percent, from 11,108 to 9,739 people. The volume of goods loaded in October 2019 was recorded at 17,043 tons, an increase of 30.57 percent over the 13,053 tons in September 2019, while the volume of goods unloaded was 117,906 tons, a decrease of 2.03 percent from the 120,349 tons in September 2019.
There are several ports in the province, the largest being the Port of Depapre in Jayapura, which started operation in 2021. There are also small to medium-sized ports in Biak, Timika, Merauke, and Agats, which serve passenger and cargo ships travelling within the province as well as to and from other Indonesian provinces.
Health-related matters in Papua are administered by the Papua Provincial Health Agency (Indonesian: Dinas Kesehatan Provinsi Papua). According to the Indonesian Central Agency on Statistics, as of 2015 there are around 13,554 hospitals in Papua, consisting of 226 state-owned hospitals and 13,328 private hospitals. Furthermore, there are 394 clinics spread throughout the province. The most prominent hospital is the Papua Regional General Hospital (Indonesian: Rumah Sakit Umum Daerah Papua) in Jayapura, the largest state-owned hospital in the province.
Papua is reported to have the highest rates of child mortality and HIV/AIDS in Indonesia. The lack of good healthcare infrastructure remains one of the main issues in the province, especially in remote regions, as most hospitals with adequate facilities are located only in major cities and towns. A measles outbreak and famine killed at least 72 people in Asmat Regency in early 2018, during which 652 children were affected by measles and 223 suffered from malnutrition.
Education in Papua, as in Indonesia as a whole, falls under the responsibility of the Ministry of Education and Culture (Kementerian Pendidikan dan Kebudayaan or Kemdikbud) and the Ministry of Religious Affairs (Kementerian Agama or Kemenag), the latter being responsible for Islamic schools. All Indonesian citizens must undertake twelve years of compulsory education, consisting of six years at the elementary level and three years each at the middle and high school levels. The Constitution also notes that there are two types of education in Indonesia, formal and non-formal, with formal education further divided into three levels: primary, secondary, and tertiary education.
As of 2015, there are 3 public universities and 40 private universities in Papua. Public universities fall under the responsibility of the Ministry of Research and Technology (Kementerian Riset dan Teknologi) as well as the Ministry of Education and Culture. The best-known university in the province is Cenderawasih University in Jayapura, which has faculties of economics, law, teacher training and education, medicine, engineering, and social and political science. Until 2002 the university had a faculty of agricultural sciences at Manokwari, which was then separated to form Universitas Negeri Papua.
While the Papuan branch of the Central Agency on Statistics had earlier projected the 2020 population of the province to be 3,435,430 people, the 2020 census revealed a total population of 4,303,707, spread throughout 28 regencies and 1 administrative city. The city of Jayapura is the most populous administrative division in the province, with 398,478 people in 2020, while Supiori Regency, which comprises mainly the island of Supiori, one of the Schouten Islands within Cenderawasih Bay off the north coast of Papua, is the least populous, with just 22,547 people. Most of the province's population is concentrated in coastal regions, especially around the city of Jayapura and its suburbs. Papua is also home to many migrants from other parts of Indonesia, an overwhelming proportion of whom came as part of a government-sponsored transmigration program. The transmigration program in Papua was only formally halted by President Joko Widodo in June 2015.
In contrast to other Indonesian provinces, which are mostly dominated by Austronesian peoples, Papua and West Papua, as well as some parts of Maluku, are home to Melanesians. The indigenous Papuans, who are part of the Melanesian peoples, form the majority of the population in the province. Many believe human habitation on the island dates to as early as 50,000 BC, and first settlement as far back as 60,000 years ago has been proposed. The island of New Guinea is presently populated by almost a thousand different tribal groups and a near-equivalent number of separate languages, making it the most linguistically diverse area in the world. Current evidence indicates that the Papuans, who constitute the majority of the island's peoples, are descended from the earliest human inhabitants of New Guinea. These original inhabitants first arrived in New Guinea at a time (either side of the Last Glacial Maximum, approximately 21,000 years ago) when the island was connected to the Australian continent via a land bridge, forming the landmass of Sahul. These peoples had made the (shortened) sea-crossing from the islands of Wallacea and Sundaland (the present Malay Archipelago) by at least 40,000 years ago.
The ancestral Austronesian peoples are believed to have arrived considerably later, approximately 3,500 years ago, as part of a gradual seafaring migration from Southeast Asia, possibly originating in Taiwan. Austronesian-speaking peoples colonized many of the offshore islands to the north and east of New Guinea, such as New Ireland and New Britain, with settlements also on the coastal fringes of the main island in places. Human habitation of New Guinea over tens of thousands of years has led to a great deal of diversity, which was further increased by the later arrival of the Austronesians and the more recent history of European and Asian settlement.
Papua is also home to ethnic groups from other parts of Indonesia, including the Javanese, Sundanese, Balinese, and Batak. Most of these migrants came as part of the transmigration program, an initiative of the Dutch colonial government later continued by the Indonesian government to move landless people from densely populated areas of Indonesia to less populous parts of the country. The program has been accused of fuelling the marginalisation and discrimination of Papuans by migrants, and of causing fears of the "Javanisation" or "Islamisation" of Papua. There is open conflict between migrants, the state, and indigenous groups due to differences in culture, particularly in administration and in cultural matters such as nudity, food, and sex. The transmigration program in Papua was stopped in 2015 due to the controversies it had caused.
Papua, the easternmost region of the Indonesian archipelago, exhibits a very complex linguistic landscape, with great language diversity and widespread multilingualism. Several language groups are scattered across this wide area: the Austronesian language family and numerous non-Austronesian languages known collectively as Papuan languages. Speakers of Austronesian languages are found in coastal communities, such as Biak, Wandamen, Waropen, and Ma'ya. Papuan languages, on the other hand, are spoken in the interior and Central Highlands, from the Bird's Head Peninsula in the west to the eastern tip of the island of New Guinea; examples include the Meybrat, Dani, Ekari, Asmat, Muyu, and Sentani languages.
Research efforts to establish how many indigenous languages exist in Papua are still being pursued. Important work on the documentation and inventory of languages in Papua has been carried out by two main agencies, SIL International and the Language and Book Development Agency in Jakarta. Their published results differ on the number of regional languages in Papua: the Language and Book Development Agency, the official Indonesian government agency, has reported 207 different regional languages in Papua, while SIL International counts 271. Some of the regional languages of Papua are spoken by large numbers of speakers over a wide area, while others are supported by only a small number of speakers in a limited area. It is estimated that a number of regional languages in Papua have still not been properly studied, so their form is not yet known. In addition to the local languages listed by the two main institutions above, there are dozens more languages brought by migrants from other islands that are not included in the list of local languages of Papua, for example languages from Sulawesi (Bugis, Makassar, Toraja, Minahasa), Javanese from Java, and local languages from Maluku. The so-called Papuan languages comprise hundreds of different languages, most of which are not related to one another.
As in other provinces, Indonesian is the official language of the state, as well as the province. Indonesian is used in inter-ethnic communication, usually between native Papuans and non-Papuan migrants who came from other parts of Indonesia. Most formal education, and nearly all national mass media, governance, administration, judiciary, and other forms of communication in Papua, are conducted in Indonesian. A Malay-based creole language called Papuan Malay is used as the lingua franca in the province. It emerged as a contact language among tribes in Indonesian New Guinea for trading and daily communication. Nowadays, it has a growing number of native speakers. More recently, the vernacular of Indonesian Papuans has been influenced by Standard Indonesian, the national standard dialect. Some linguists have suggested that Papuan Malay has its roots in North Moluccan Malay, as evidenced by the number of Ternate loanwords in its lexicon. Others have proposed that it is derived from Ambonese Malay. A large number of local languages are spoken in the province, and the need for a common lingua franca has been underlined by the centuries-old traditions of inter-group interaction in the form of slave-hunting, adoption, and intermarriage. It is likely that Malay was first introduced by the Biak people, who had contacts with the Sultanate of Tidore, and later, in the 19th century, by traders from China and South Sulawesi. However, Malay was probably not widespread until the adoption of the language by the Dutch missionaries who arrived in the early 20th century and were then followed in this practice by the Dutch administrators. The spread of Malay into the more distant areas was further facilitated by the Opleiding tot Dorpsonderwizer ('Education for village teacher') program during the Dutch colonial era. There are four varieties of Papuan Malay that can be identified, including Serui Malay. A variety of Papuan Malay is spoken in Vanimo, Papua New Guinea near the Indonesian border.
Religion in Papua (2022)
According to the Indonesian Citizenship and Civil Registry in 2022, 70.15% of Papuans identified themselves as Christians, with 64.68% being Protestant and 5.47% Catholic. 29.56% of the population are Muslims, and less than 1% are Buddhists or Hindus. There is also substantial practice of animism, the traditional religion of many Papuans, with many blending animistic beliefs with other religions such as Christianity and Islam. Christianity, both Protestantism and Roman Catholicism, is mostly adhered to by native Papuans and by migrants from Maluku, East Nusa Tenggara, North Sulawesi, and the Batak lands of North Sumatra. Islam is mostly adhered to by migrants from North Maluku, South Sulawesi (except Torajans), and western Indonesia, as well as by some native Papuans. Hinduism and Buddhism are mostly adhered to by Balinese migrants and Chinese Indonesians respectively.
Islam has been present in Papua since the 15th century as a result of interaction with Muslim traders and the Moluccan Muslim sultanates, the earliest being Bacan. There are many earlier theories and folk legends about the origin of Islam, sometimes mixed with the indigenous folk religions of Fakfak, Kaimana, Bintuni, and Wondama; these include an Islamic Hajj procession that goes not to Mecca but to Nabi Mountain, near Arguni Bay and Wondama Bay. According to accounts of Acehnese origin, a Samudra Pasai figure called Tuan Syekh Iskandar Syah was sent to Mesia (Kokas) to preach in Nuu War (Papua). He converted a Papuan called Kriskris by teaching him about Alif Lam Ha (Allah) and Mim Ha Mim Dal (Muhammad); Kriskris became imam and the first king of Patipi, Fakfak. Syekh Iskandar brought with him religious texts, which were copied onto Koba-Koba leaves and wood bark. He later returned to Aceh with the original manuscripts, but first visited the Moluccas, specifically Sinisore village; this corresponds with that village's own account that its Islam came from Papua. A study by the Fakfak government mentioned another Acehnese figure, Abdul Ghafar, who visited Old Fatagar in 1502 under the reign of Rumbati King Mansmamor; he preached in the Onin language (the lingua franca of the area at the time) and was buried next to the village mosque in Rumbati, Patipi Bay, Fakfak. Based on the family account of Abdullah Arfan of the Salawati Kingdom dynasty, the first Papuan Muslim, in the 16th century, was Kalewan, who married Siti Hawa Farouk, a muballighah from Cirebon, changed his name to Bayajid, and became the ancestor of the Arfan clan. Meanwhile, according to the oral history of Fakfak and Kaimana, a Sufi named Syarif Muaz al-Qathan from Yemen constructed a mosque in Tunasgain; dating based on the 8 merbau wood poles previously used as ceremonial Alif poles for the mosque, replaced roughly every 50 years, places it around 1587. He is also credited with converting Samay, an Adi ruler of the royal line of Sran. Islam grew only in the coastal parts of Papua, especially the Bird's Head area, and did not spread to the interior of the island until the Dutch started sending migrants in 1902 and exiling Indonesian leaders to Merauke in 1910. Muhammadiyah figures exiled in Papua helped spread Islam in the region; to address its members' educational needs, Muhammadiyah formally sent its first teacher only in 1933. Islam spread into the interior highlands only after 1962, through interaction with teachers and migrants, as in Jayawijaya and among the Dani tribe of Megapura, while in Wamena the conversion of Walesi village in 1977 is attributed to Jamaludin Iribaram, a Papuan teacher from Fakfak. Other smaller indigenous Islamic communities can also be found in Asmat, Yapen, Waropen, Biak, Jayapura, and Manokwari.
Missionaries Carl Ottow and Johann Geisler, under the initiative of Ottho Gerhard Heldring and with permission from the Tidore Sultanate, were the first Christian missionaries to reach Papua. They landed at Mansinam Island, near Manokwari, on 5 February 1855; since 2001, the fifth of February has been a Papuan public holiday commemorating this first landing. In 1863, sponsored by the Dutch colonial government, the Utrecht Mission Society (UZV) started a Christian-based education system as well as regular church services in Western New Guinea. Initially the Papuans' attendance was encouraged using bribes of betel nut and tobacco, but this practice was later stopped. In addition, slaves were bought to be raised as step-children and then freed. By 1880, only 20 Papuans had been baptized, including many freed slaves. The Dutch government established posts in Netherlands New Guinea in 1898, a move welcomed by the missionaries, who saw orderly Dutch rule as the essential antidote to Papuan paganism. Subsequently, the UZV mission had more success, with a mass conversion near Cenderawasih Bay in 1907 and the evangelization of the Sentani people by Pamai, a native Papuan, in the late 1920s. During the Great Depression, the mission suffered a funding shortfall and switched to native evangelists, who had the advantage of speaking the local language (rather than Malay) but were often poorly trained. The mission extended in the 1930s to Yos Sudarso Bay, and by 1934 the UZV mission had over 50,000 Christians, 90% of them in North Papua and the remainder in West Papua. By 1942 the mission had expanded to 300 schools in 300 congregations. The first Catholic presence in Papua was a Jesuit mission in Fakfak in 1894, and in 1902 the Vicariate of Netherlands New Guinea was established. Despite the earlier activity in Fakfak, the Dutch restricted the Catholic Church to the southern part of the island, where it was active especially around Merauke. The mission campaigned against promiscuity and the destructive practice of headhunting among the Marind-anim. Following the 1918 flu pandemic, which killed one in five people in the area, the Dutch government agreed to the establishment of model villages based on European conditions, including the wearing of European clothes, to which the people submitted only under coercion. In 1925 the Catholics sought to re-establish their mission in Fakfak, and permission was granted in 1927. This brought the Catholics into conflict with the Protestants in North Papua, who suggested expanding to South Papua in retaliation.
The native Papuan people have distinct cultures and traditions not found in other parts of Indonesia. Coastal Papuans are usually more willing to accept modern influences into their daily lives, which in turn diminishes their original culture and traditions, while most inland Papuans still preserve their original culture and traditions, even though their way of life over the past century has been shaped by the encroachment of modernity and globalization. Each Papuan tribe usually practices its own traditions and culture, which may differ greatly from one tribe to another.
The Ararem tradition is the Biak custom of delivering the dowry from a future husband to the family of his prospective wife; in the Biak language, the word "Ararem" means dowry. In this procession, the bride and groom are escorted on foot, accompanied by songs, dances, and music. The amount of the dowry is determined by the woman's family as agreed by her relatives, and the date of its delivery must be agreed upon by both the prospective wife's family and the prospective husband's family. In the tradition of the Biak people, payment of the dowry is an obligation that must be observed because it involves the consequences of a marriage.
There are many traditional dances native to the province of Papua, and each Papuan tribe usually has its own unique traditional dances.
The Yospan dance (Indonesian: Tarian Yospan) is a social dance of Papua, a traditional dance originating from the coastal regions of Biak, Yapen, and Waropen that is often performed by young people as an expression of friendship. The Yospan dance originated from two dances, the Yosim and the Pancar, which were eventually combined into one; Yospan is thus an acronym of Yosim and Pancar. In the Yosim dance, which originated from Yapen and Waropen, the dancers invite other residents to join in, immersed in the songs sung by a group of singers and musicians. The musical instruments used are simple, consisting of the ukulele and guitar, instruments that are not native to Papua. There is also an instrument that functions as a bass, with three strings usually made from rolled fibers of a type of pandanus leaf found in the forests of Papua's coastal areas. An instrument called the Kalabasa is also played during the dance; it is made of a dried calabash filled with beads or small stones and is played simply by shaking it. The women dancers wear woven sarongs covering their chests and decorate their heads with flowers and bird feathers, while the male dancers usually wear shorts, leave their chests bare, and also decorate their heads with bird feathers. The Pancar dance, which originated from Biak, is accompanied only by a tifa, the traditional musical instrument of the coastal tribes in Papua.
The Isosolo dance is performed by the inhabitants living around Lake Sentani in Jayapura and symbolizes the harmony between different tribes in Papua. The art of boat dancing is a tradition of the Papuan people, especially among the Sentani, in which the dance is performed while travelling from one village to another. In the Sentani language, the Isosolo or Isolo dance is a traditional art of the Sentani people who dance on boats on Lake Sentani. The word Isosolo consists of two words, iso and solo (or holo): iso means to rejoice and dance to express the feelings of the heart, while holo means a group of dancers of all ages. Hence, isosolo means a group of people who dance with joy to express their feelings. The Isosolo dance in Sentani is usually performed by ondofolo (traditional leaders) and the village community to present gifts to another ondofolo. The items offered are those considered valuable, such as large wild boars, garden produce, girls delivered to be married to the ondofolo, and various other traditional gifts. Nowadays, apart from being a form of respect for the ondofolo, isosolo is regarded more as a performance of the Sentani people's pride, and it is one of the popular attractions at the annual Lake Sentani Festival.
Each Papuan tribe usually has its own war dance. The Papuan war dance is one of the oldest dances of the Papuan people: this classical dance has existed for thousands of years and is regarded as a legacy of Indonesia's prehistoric times. In Papuan culture, the dance is a symbol of the strength and bravery of the Papuan people, and it is believed to have once been part of traditional ceremonies held when fighting other tribes.
Another traditional dance common to most if not all Papuan tribes is the musyoh. The dance has its origins in a particular belief: in ancient times, when a member of a Papuan tribe died in an accident or some other unexpected way, the Papuan people believed that the spirit of the deceased was still roaming and unsettled. To overcome this, the tribesmen created a ritual in the form of the musyoh dance, which is therefore often referred to as a spirit-exorcism dance. When performed for this purpose, the musyoh dance is danced by men. Besides exorcising spirits, the musyoh dance is also used by the Papuan people for welcoming guests, as a symbol of respect, gratitude, and happiness; in that case, it is performed by both men and women. The costumes worn by the dancers are very simple, made from natural materials, namely processed tree bark and plant roots, which are fashioned into head coverings, tops and bottoms, bracelets, and necklaces. Distinctive markings painted on the dancers' bodies also show the uniqueness of the dance.
The kariwari is one of the traditional Papuan houses, specifically the traditional house of the Tobati-Enggros people who live around Yotefa Bay and Lake Sentani near Jayapura. Unlike other forms of Papuan traditional houses, such as the round honai, the kariwari is usually constructed in the shape of an octagonal pyramid. Kariwari are usually made of bamboo, ironwood, and forest sago leaves. The house consists of two floors and three rooms, each with a different function. Unlike the honai, which can be lived in by anyone and which has political and legal functions, the kariwari is not a dwelling, not even for a tribal chief; it serves more specifically as a place of education and worship, and its position in the Tobati-Enggros community is therefore considered sacred and holy. Like traditional houses in general, the kariwari has a design full of decorative details that make it unique, the decorations being related to Papuan culture, especially that of the Tobati-Enggros. The decorations found in the kariwari are usually works of art such as paintings, carvings, and sculptures. The kariwari is also decorated with weapons such as bows and arrows, and with the skeletal remains of prey animals, usually wild boar fangs, kangaroo skeletons, turtle shells, birds-of-paradise, and so on.
The rumsram is the traditional house of the Biak Numfor people on the northern coast of Papua. The house was originally intended for men; women were prohibited from entering or even approaching it. Its function is similar to that of the kariwari, namely as a place for teaching and educating boys entering adolescence as they seek life experience. The building is square, with a roof in the shape of an upturned boat, reflecting the seafaring background of the Biak Numfor people. The materials used are bark for the floors, split and flattened water bamboo for the walls, and dried sago leaves for the roof and parts of the walls. The original rumsram walls had only a few windows, positioned at the front and back. A rumsram usually has a height of approximately 6–8 m and is divided into two parts, differentiated by floor level. The first floor is open and without walls, leaving only the building columns visible; here, young men are taught sculpting, shield-making, boat building, and war techniques. In a traditional ceremony called Wor Kapanaknik, which in the Biak language means "to shave a child's hair", a ritual is carried out when boys are 6–8 years old, the age at which a child is considered able to think and begins to be educated in the search for life experience and in how to become a strong and responsible man and head of a family. The children then enter a rumsram, and the rite of passage is itself also called rumsram because the ritual is carried out in the house.
The cuscus bone skewer is a traditional Papuan weapon used by one of the indigenous Papuan tribes, the Bauzi people, who still maintain their tradition of hunting and gathering. The weapon they use to hunt animals while waiting for the harvest is a piercing tool made of cuscus bones. The use of cuscus bone as a traditional weapon is very environmentally friendly, since its manufacture requires no polluting industrial equipment. The weapon is made from a cleaned cuscus bone, taken after the meat has been eaten and separated from the bone, which is then sharpened by rubbing it repeatedly against a whetstone until the desired sharpness is achieved.
Papuan knife blades are usually used for slashing or cutting when hunting animals in the forest. Even when the animals they face are large mammals or crocodiles, the Papuan people still adhere to the prevailing custom that no kind of firearm may be used when hunting. Papuan daggers are knives made from a material that is unique and difficult to obtain elsewhere: the bones of the cassowary, an animal endemic to Papua. In local culture, cassowary bones are made into tools with practical value for daily life; the feathers attached to the blade's handle are also those of the cassowary.
The Papuan spear is referred to by the local Sentani community as Mensa. The spear is a weapon that can be used for both fighting and hunting, and Papuan culture also often uses it as a prop in dances. The weapon is made from basic materials that are easily found in nature: wood for the handle and a sharpened river stone for the spearhead. For that reason, the spear has survived as a weapon that must be present in hunting and fighting activities. What makes this traditional Papuan weapon special is the rule that a spear may not be used for anything other than hunting and fighting; for example, it is forbidden to cut young tree shoots with a spear or to use a spear to carry garden produce. If this rule is broken, the person who wields the spear is said to suffer bad luck. The manufacturing process for the spear shaft takes a long time. The wood is taken from the kayu swang tree, with a diameter of about 25 cm; after drying in the sun, it is split into four and shaped to a rounded cross-section, and the tip is then worked into a two-edged, leaf-shaped spearhead.
The bow and arrow is a traditional Papuan weapon, locally called Fela in Sentani, used for hunting wild boar and other animals. The arrowheads are made from the bark of the sago tree, the bow from a type of wild betel nut tree (which can also be used for arrowheads), the shaft from a type of grass or small bamboo without a cavity, and the bowstring from rattan. Depending on the phase of battle, a variety of arrow types are used: the Hiruan is a plain, sharp, undecorated arrow used to lure the enemy; the Humbai has a tip serrated on one side and plain on the other, used to shoot an enemy who can be seen approaching; the Hube is serrated on both sides, used as the enemy comes still closer; the Humame has a three-sided serrated tip, used at very close range; the Hukeli has a four-sided serrated arrowhead, used only after the Humame arrows are depleted; and the Pulung Waliman has a two-sided arrowhead with three large teeth and a hole in the middle, used only to kill an enemy chieftain. For hunting, three kinds of arrows are used: the Hiruan, which is similar to the war Hiruan apart from its shape; the Maigue, with a two-pronged tip; and the Ka'ai, with a three-pronged tip.
The Papuan parang, called Yali, is made from old swang wood and takes 2–3 days to make; it can be made before or after the wood is dried. It is used for household purposes such as cooking, cutting meat, cutting vegetables, and cutting down sago. Papuan machetes are also used in agriculture and are kept as collectors' items. They usually bear carvings symbolizing prosperity for humans or for animals.
Papuan oars are traditional Papuan tools, called Roreng for males and Biareng for females. They are made from swang wood and the bark of sago trees: the wood is split to create a flat surface and then shaped like an oar, with the tip made thinner and sharper. The oar primarily serves to propel canoes, but when under attack from enemies at sea it can be used as a spear because of its sharp tip. Oars usually bear ornamental engravings shaped like fingers, called Hiokagema, symbolizing the united strength of ten fingers powering the oars.
Papuan stone axes from Sentani, called Mamehe, are usually made from river stones secured to the handle with rattan. The head is usually made from batu pualam (marble), shaped by slowly chipping it with another stone. According to local tradition, the making of the stone head must be done in secret from the family and can take up to 2 months. The handle is constructed from swang wood or ironwood: one part secures the axe head and another forms the grip, with all parts tied together using rattan. The axes were traditionally made for cutting down trees and building canoes, but nowadays they are more often kept as collectors' items.
The tifa is a traditional Papuan musical instrument that is played by beating. Unlike those from Maluku, the Papuan tifa is usually longer and has a handle on one side, whereas the Maluku tifa is wider and has no handle. It is made from the strongest wood available, usually lenggua wood (Pterocarpus indicus), with animal skin as the upper membrane; the skin is bound tightly in a circle with rattan so that it produces a fine sound. The body of the instrument carries typical Papuan carvings. The tifa is usually used to accompany guest-welcoming events, traditional parties, dances, and so on, and the sound it produces depends on the size of the instrument. Apart from accompanying dances, the tifa also has a social meaning based on its function and on the carved ornaments on its body: in the culture of the Marind-Anim people in Merauke, each clan has its own shape, motif, and name for its tifa, and the same applies to the Biak and Waropen people.
The triton is a traditional Papuan musical instrument played by blowing. It is found all along the coast, especially in the Biak, Yapen, Waropen, and Nabire areas. Originally the instrument was used only as a means of communication, calling, and signaling; today it is also used for entertainment and as a traditional musical instrument.
Native Papuan food usually consists of roasted boar with tubers such as sweet potato. The staple food of Papua, and of eastern Indonesia generally, is sago, the counterpart of rice in central and western Indonesian cuisines. Sago is processed either as a pancake or as a sago congee called papeda, usually eaten with a yellow soup made from tuna, red snapper, or other fish spiced with turmeric, lime, and other spices. On some coasts and lowlands of Papua, sago is the main ingredient of all dishes: sagu bakar, sagu lempeng, and sagu bola are well known throughout Papua, especially in the traditional culinary customs of Mappi, Asmat, and Mimika. Papeda is one of the sago dishes that is now rarely found. As Papua is a non-Muslim-majority region, pork is readily available everywhere. In Papua, a pig roast in which pork and yams are cooked over heated stones placed in a hole dug in the ground and covered with leaves, a method called bakar batu (burning the stone), is an important cultural and social event among Papuan people.
In the coastal regions, seafood is the main food of the local people. One of the famous seafood dishes from Papua is fish wrap (Indonesian: Ikan Bungkus), known elsewhere in Indonesia as pepes ikan. The Papuan version is known for being very fragrant, because extra bay leaves are added so that the spice mixture becomes more aromatic and soaks into the fish. The basic ingredient is sea fish, most commonly milkfish, which is suitable for wrapping because its flesh does not crumble during cooking. Some of the spices are sliced or cut into pieces, namely red and bird's eye chilies, bay leaves, tomatoes, galangal, and lemongrass stalks, while the others, turmeric, garlic, red onions, red chilies, coriander, and hazelnut, are crushed first and then mixed with or smeared on the fish. The fish is wrapped in banana leaves.
Common Papuan snacks are usually made from sago. Kue bagea (also called sago cake) is a cake originating from Ternate in North Maluku, although it can also be found in Papua. It has a round shape and a creamy color, and its hard consistency can be softened in tea or water to make it easier to chew. It is prepared using sago, a plant-based starch derived from the sago palm or sago cycad. Sagu lempeng is a typical Papuan snack made of sago processed into plate-like slabs, and it is a favorite of travelers. It is nevertheless very hard to find in eateries, because the bread is made for family consumption and is usually eaten immediately after cooking. Making sagu lempeng is as easy as making other breads: the sago is pressed into rectangular iron molds and baked until done, like white bread. Originally it was plain, but recently sugar has been added to give it a sweet taste. It has a tough texture and can be enjoyed by dipping it in water or another drink to soften it. Sago porridge is another dish found in Papua; it is usually eaten with a yellow soup made of mackerel or tuna seasoned with turmeric and lime, and is sometimes also consumed with boiled tubers such as cassava or sweet potato. Papaya flowers and sautéed kale are often served as side dishes to accompany the sago porridge. In the inland regions, sago worms are usually served as a snack. The worms come from sago trunks that are cut and left to rot; the rotting stems draw out the worms, which vary in size from very small up to the size of an adult's thumb. The sago worms are usually eaten alive or cooked beforehand, for example stir-fried, boiled, fried, or skewered, and over time the people of Papua have come to process them into sago worm satay. Satay from sago worms is made in the same way as satay in general, namely threaded onto skewers and grilled over hot coals. | [
{
"paragraph_id": 0,
"text": "Papua is a province of Indonesia, comprising the northern coast of Western New Guinea together with island groups in Cenderawasih Bay to the west. It roughly follows the borders of Papuan customary region of Tabi Saireri. It is bordered by the sovereign state of Papua New Guinea to the east, the Pacific Ocean to the north, Cenderawasih Bay to the west, and the provinces of Central Papua and Highland Papua to the south. The province also shares maritime boundaries with Palau in the Pacific. Following the splitting off of twenty regencies to create the three new provinces of Central Papua, Highland Papua, and South Papua on 30 June 2022, the residual province is divided into eight regencies (kabupaten) and one city (kota), the latter being the provincial capital of Jayapura. The province has a large potential in natural resources, such as gold, nickel, petroleum, etc. Papua, along with five other Papuan provinces, has a higher degree of autonomy level compared to other Indonesian provinces.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The island of New Guinea has been populated for tens of thousands of years. European traders began frequenting the region around the late 16th century due to spice trade. In the end, the Dutch Empire emerged as the dominant leader in the spice war, annexing the western part of New Guinea into the colony of Dutch East Indies. The Dutch remained in New Guinea until 1962, even though other parts of the former colony has declared independence as the Republic of Indonesia in 1945. Following negotiations and conflicts with the Indonesian government, the Dutch transferred Western New Guinea to a United Nations Temporary Executive Authority (UNTEA), which was again transferred to Indonesia after the controversial Act of Free Choice. The province was formerly called Irian Jaya and comprised the entire Western New Guinea until the inauguration of the province of West Papua (then West Irian Jaya) in 2001. In 2002, Papua adopted its current name and was granted a special autonomous status under Indonesian legislation.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The province of Papua remains one of the least developed provinces in Indonesia. As of 2020, Papua has a GDP per capita of Rp 56.1 million (US$ 3,970), ranking 11th place among all Indonesian provinces. However, Papua only has a Human Development Index of 0.604, the lowest among all Indonesian provinces. The harsh New Guinean terrain and climate is one of the main reasons why infrastructure in Papua is considered to be the most challenging to be developed among other Indonesian regions.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The 2020 census revealed a population of 4,303,707, of which the majority were Christian. The official estimate for mid 2022 was 4,418,581 prior to the division of the province into four separate provinces. The official estimate of the population in mid 2022 of the reduced province was 1,034,956. The interior is predominantly populated by ethnic Papuans while coastal towns are inhabited by descendants of intermarriages between Papuans, Melanesians and Austronesians, including other Indonesian ethnic groups. Migrants from the rest of Indonesia also tend to inhabit the coastal regions. The province is also home to some uncontacted peoples.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Dutch East India Company 1640s–1799 Dutch East Indies 1800–1942; 1944–1949 Empire of Japan 1942–1944 Dutch New Guinea 1949–1962 UNTEA 1962–1963 Indonesia 1963–present",
"title": "History"
},
{
"paragraph_id": 5,
"text": "There are several theories regarding the origin of the word Papua. One theory is that the name comes from the word 'Papo Ua', named by the Tidore Sultanate, which in the Tidore language means \"not joining\" or \"not being united\", meaning that there was no king who rules the area. Before the age of colonization, the Tidore Sultanate controlled some parts of the Bird's Head Peninsula in what is now the provinces of West Papua and Southwest Papua before expanding to also include coastal regions in the current province of Papua. This relationship plays an important historical role in binding the archipelagic civilizations of Indonesia to the Papuan world. Another theory is that the word Papua comes from the Malay word 'papuwah', which means 'frizzled hair'. It was first mentioned in the 1812 Malay Dictionary by William Marsden, although it was not found in earlier dictionaries. In the records of 16th century Portuguese and Spanish sailors, the word 'Papua' is the designation for the inhabitants of the Raja Ampat Islands and the coastal parts of the Bird's Head Peninsula. The former name of the province, Irian Jaya, was suggested during a tribal committee meeting in Tobati, Jayapura, formed by Atmoprasojo, head of the bestuur school in the 1940s. Frans Kaisiepo, the committee leader suggested the name from Mansren Koreri myths, Iri-an from the Biak language of Biak Island, meaning \"hot land\" referring to the local hot climate, but also from Iryan which means heated process as a metaphor for a land that is entering a new era. In Serui Iri-an (lit. land-nation) means \"pillar of nation\", while in Merauke Iri-an (lit. placed higher-nation) means \"rising spirit\" or \"to rise\". The name was promoted in 1945 by Marcus Kaisiepo, brother of the future governor Frans Kaisiepo. The name Irian was politicized later by Marthin Indey and Silas Papare with the Indonesian acronym 'Ikut Republik Indonesia Anti Nederland' (Join the Republic of Indonesia oppose the Netherlands). The name was used throughout the Suharto administration, until it was changed to Papua during the administration of President Abdurrahman Wahid.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The Dutch, who arrived later under Jacob Le Maire and Willem Schouten, called it Schouten island. They later used this name only to refer to islands off the north coast of Papua proper, the Schouten Islands or Biak Island. When the Dutch colonized this island as part of the Dutch East Indies, they called it Nieuw Guinea.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Speakers align themselves with a political orientation when choosing a name for the western half of the island of New Guinea. The official name of the region is \"Papua\" according to International Organization for Standardization (ISO). Independence activists refer to the region as \"West Papua,\" while Indonesian officials have also used \"West Papua\" to name the westernmost province of the region since 2007. Historically, the region has had the official names of Netherlands New Guinea (1895–1962), West New Guinea or West Irian (1945–73), Irian Jaya (1973–2002), and Papua (2002–present).",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Papuan habitation of the region is estimated to have begun between 42,000 and 48,000 years ago. Research indicates that the highlands were an early and independent center of agriculture, and show that agriculture developed gradually over several thousands of years; the banana has been cultivated in this region for at least 7,000 years. Austronesian peoples migrating through Maritime Southeast Asia settled in the area at least 3,000 years ago, and populated especially in Cenderawasih Bay. Diverse cultures and languages have developed in the island due to geographical isolation; there are over 300 languages and two hundred additional dialects in the region.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Ghau Yu Kuan, a Chinese merchant, came to Papua around the latter half of 500 AD and referred to it as Tungki, the area where they obtained spices. Meanwhile, in the latter half of 600 AD, the Sumatra-based empire of Srivijaya referred to the island as Janggi. The empire engaged in trade relations with western New Guinea, initially taking items like sandalwood and birds-of-paradise in tribute to China, but later making slaves out of the natives. It was only at the beginning of 700 AD that traders from Persia and Gujarat began to arrive in what is now Papua and called it Dwi Panta or Samudrananta, which means 'at edge of the ocean'.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The 14th-century Majapahit poem Nagarakretagama mentioned Wwanin or Onin and Sran as a recognized territory in the east, today identified as Onin peninsula in Fakfak Regency in the western part of the larger Bomberai Peninsula south of the Bird's Head Peninsula. At that time, Papua was said to be the eighth region of the Majapahit Empire. Wanin or Onin was one of the oldest indigenous names in recorded history to refer to the western part of the island of New Guinea. A transcript from the Nagarakretagama says the following:",
"title": "History"
},
{
"paragraph_id": 11,
"text": "According to some linguists, the word Ewanin is another name for Onin as recorded in old communal poems or songs from Wersar, while Sran popularly misunderstood to refers to Seram Island in Maluku, is more likely another name for a local Papuan kingdom which in its native language is called Sran Eman Muun, based in Kaimana and its furthest influence extends to the Kei Islands, in southeastern Maluku. In his book Nieuw Guinea, Dutch author WC. Klein explained the beginning of the influence of the Bacan Sultanate in Papua. There he wrote: In 1569 Papoese hoof den bezoeken Batjan. Ee aanterijken worden vermeld (In 1569, Papuan tribal leaders visited Bacan, which resulted in the creation of new kingdoms). According to the oral history of the Biak people, there used to be a relationship and marriage between their tribal chiefs and the sultans of Tidore in connection with Gurabesi, a naval leader of Waigeo from Biak. The Biak people is the largest Melanesian tribe, spread on the northern coast of Papua, making the Biak language widely used and considered the language of Papuan unity. Due to the relationship of the coastal areas of Papua with the Sultans of Maluku, there are several local kingdoms on this island, which shows the entry of feudalism.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Since the 16th century, apart from the Raja Ampat Islands which was contested between the Bacan Sultanate, Tidore Sultanate, and Ternate Sultanate, other coastal areas of Papua from the island of Biak to Mimika became vassals of the Tidore Sultanate. The Tidore Sultanate adheres to the trade pact and custom of Uli-Siwa (federation of nine), there were nine trade partners led by Tidore in opposition to the Ternate-led Uli Lima (federation of five). In administering its regions in Papua, Tidore divide them to three regions, Korano Ngaruha ( lit. Four Kings ) or Raja Ampat Islands, Papoua Gam Sio ( lit. Papua The Nine Negeri ) and Mafor Soa Raha ( lit. Mafor The Four Soa ). The role of these kingdoms began to decline due to the entry of traders from Europe to the archipelago marking the beginning of colonialism in the Indonesian Archipelago. During Tidore's rule, the main exports of the island during this period were resins, spices, slaves and the highly priced feathers of the bird-of-paradise. Sultan Nuku, one of the most famous Tidore sultans who rebelled against Dutch colonization, called himself \"Sultan of Tidore and Papua\", during his revolt in the 1780s. He commanded loyalty from both Moluccan and Papuan chiefs, especially those of Raja Ampat Islands. Following Tidore's defeat, much of the territory it claimed in western part of New Guinea came under Dutch rule as part of the Dutch East Indies.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In 1511, Antonio d'Arbau, a Portuguese sailor, called the Papua region as \"Os Papuas\" or llha de Papo. Don Jorge de Menetes, a sailor from Spain also stopped by in Papua a few years later (1526–1527), he refers to the region as 'Papua', which was mentioned in the diary of Antonio Pigafetta, the clerk for the Magellan voyage. The name Papua was known to Figafetta when he stopped on the island of Tidore. On 16 May 1545, Yñigo Ortiz de Retez, a Spanish maritime explorer in command of the San Juan de Letran, left port in Tidore, a Spanish stronghold in the Maluku Islands and going by way of the Talaud Islands and the Schoutens, reached the northern coast of New Guinea, which was coasted till the end of August when, owing to the 5°S latitude, contrary winds and currents, forcing a return to Tidore arriving on 5 October 1545. Many islands were encountered and first charted, along the northern coast of New Guinea, and in the Padaidos, Le Maires, Ninigos, Kaniets and Hermits, to some of which Spanish names were given. On 20 June 1545 at the mouth of the Mamberamo River (charted as San Agustin) he took possession of the land for the Spanish Crown, in the process giving the island the name by which it is known today. He called it Nueva Guinea owing to the resemblance of the local inhabitants to the peoples of the Guinea coast in West Africa. The first map showing the whole island as an island was published in 1600 and shown 1606, Luís Vaz de Torres explored the southern coast of New Guinea from Milne Bay to the Gulf of Papua including Orangerie Bay, which he named Bahía de San Lorenzo. His expedition also discovered Basilaki Island, naming it Tierra de San Buenaventura, which he claimed for Spain in July 1606. On 18 October, his expedition reached the western part of the island in present-day Indonesia, and also claimed the territory for the King of Spain.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In 1606, a Duyfken expedition led by the commander Wiliam Jansen from Holland landed in Papua. This expedition consisted of 3 ships, where they sailed from the north coast of Java and stopped at the Kei Islands, at the southwestern coast of Papua. With the increasing Dutch grip in the region, the Spanish left New Guinea in 1663. In 1660, the Dutch recognized the Sultan of Tidore's sovereignty over New Guinea. New Guinea thus became notionally Dutch as the Dutch held power over Tidore.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Dutch New Guinea in the early 19th century was administered from the Moluccas. Although the coast had been mapped in 1825 by Lieutenant Commander D.H. Kolff, there had been no serious effort to establish a permanent presence in Dutch New Guinea. The British, however, had shown considerable interest in the area, and were threatening to settle it. To prevent this, the Governor of the Moluccas, Pieter Merkus, urged the Dutch government to establish posts along the coast. An administrative and trading post established in 1828 on Triton Bay on the southwest coast of New Guinea. On 24 August 1828, the birthday of King William I of the Netherlands, the Dutch flag was hoisted and the Dutch claimed all of Western New Guinea, which they called Nieuw Guinea Several native chieftains proclaimed their loyalty to the Netherlands. The post was named Fort Du Bus for the then-Governor General of the Dutch East Indies, Leonard du Bus de Gisignies. 30 years later, Germans established the first missionary settlement on an island near Manokwari. While in 1828 the Dutch claimed the south coast west of the 141st meridian and the north coast west of Humboldt Bay in 1848, they did not try to develop the region again until 1896; they established settlements in Manokwari and Fak-Fak in response to perceived Australian ownership claims from the eastern half of New Guinea. Great Britain and Germany had recognized the Dutch claims in treaties of 1885 and 1895. At the same time, Britain claimed south-east New Guinea, later as the Territory of Papua, and Germany claimed the northeast, later known as the Territory of New Guinea. The German, Dutch and British colonial administrators each attempted to suppress the still-widespread practices of inter-village warfare and headhunting within their respective territories. In 1901, the Netherlands formally purchased West New Guinea from the Sultanate of Tidore, incorporating it into the Netherlands East Indies.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Dutch activity in the region remained in the first half of the twentieth century, notwithstanding the 1923 establishment of the Nieuw Guinea Beweging (New Guinea Movement) in the Netherlands by ultra right-wing supporters calling for Dutchmen to create a tropical Netherlands in Papua. This pre-war movement without full government support was largely unsuccessful in its drive, but did coincide with the development of a plan for Eurasian settlement of the Dutch Indies to establish Dutch farms in northern West New Guinea. This effort also failed as most returned to Java disillusioned, and by 1938 just 50 settlers remained near Hollandia and 258 in Manokwari. The Dutch established the Boven Digul camp in Tanahmerah, as a prison for Indonesian nationalists. Among those interned here were writer Marco Kartodikromo, Mohammad Hatta, who would become the first vice president of Indonesia, and Sutan Sjahrir, the first Indonesian Prime Minister.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Before about 1930, European maps showed the highlands as uninhabited forests. When first flown over by aircraft, numerous settlements with agricultural terraces and stockades were observed. The most startling discovery took place on 4 August 1938, when Richard Archbold discovered the Grand Valley of the Baliem River, which had 50,000 yet-undiscovered Stone Age farmers living in villages. The people, known as the Dani, were the last society of its size to make first contact with the rest of the world.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The region became important in World War II with the Pacific War upon the Netherlands' declaration of war on Japan after the bombing of Pearl Harbor. In 1942, the northern coast of West New Guinea and the nearby islands were occupied by Japan. By late 1942, most of the Netherlands Indies were occupied by Japan. Behind Japanese lines in New Guinea, Dutch guerrilla fighters resisted under Mauritz Christiaan Kokkelink. Allied forces drove out the Japanese after Operations Reckless and Persecution, with amphibious landings near Hollandia, from 21 April 1944. The area served as General Douglas MacArthur's headquarters until the conquest of the Philippines in March 1945. Over twenty U.S. bases were established and half a million US personnel moved through the area. West New Guinean farms supplied food for the half million US troops. Papuan men went into battle to carry the wounded, acted as guides and translators, and provided a range of services, from construction work and carpentry to serving as machine shop workers and mechanics. Following the end of the war, the Dutch retained possession of West New Guinea from 1945.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "In 1944, Jan van Eechoud set up a school for bureaucrats in Hollandia (now Jayapura). One early headmaster of the school was Soegoro Atmoprasojo, an Indonesian nationalist graduate of Taman Siswa and former Boven-Digoel prisoners, in one of these meetings the name \"Irian\" was suggested. Many of these school early graduates would go on to found Indonesian independence movement in Western New Guinea, while some went on to support Dutch authorities and pursue Papuan independence. In December 1945, Atmoprasojo alongside his students were planning for a rebellion, however Dutch authorities would be alerted by a defecting member of Papuan Battalion on 14 December 1945, utilising forces from Rabaul, Dutch authorities would also capture 250 people possibly involved in this attack. The news of Indonesian independence proclamation arrived in New Guinea primarily through shipping laborers associated with Sea Transport Union of Indonesia (Sarpelindo), who were working for ships under the flag of Australian and the Dutch. This led to the formation of the Komite Indonesia Merdeka or KIM branch in Abepura, Hollandia in October 1946, originally an organization for Indonesian exiles in Sydney. It was led by Dr. J.A. Gerungan, a woman doctor who led an Abepura hospital, by December 1946, it came to be led by Martin Indey. KIM was one of the first Indonesian nationalist groups in New Guinea, whose members were mostly former associates of Soegoro. Simultaneously another separate Indonesian nationalist movement in New Guinea formed when Dr. Sam Ratulangi, was exiled at Serui, along with his six staff by the Netherlands Indies Civil Administration on 5 July 1946. In exile he met with Silas Papare who was also exiled from a failed Pagoncang Alam led rebellion to free Atmoprasojo, on 29 November 1946, an organization called Indonesian Irian Independence Party (PKII) was formed. A year later, on 17 August 1947, former students of Soegoro and others would held a red and white flag-raising ceremony to commemorate the Indonesian independence day.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "KIM and PKII members began to start movements in other areas of New Guinea, most of these were unsuccessful, and the perpetrators were either imprisoned or killed. In Manokwari, a movement called Red and White Movement (GMP) was founded, which was led by Petrus Walebong and Samuel D. Kawab. This movement later spread to Babo, Kokas, Fakfak, and Sorong. In Biak, a local branch of KIM was joined with Perserikatan Indonesia Merdeka (PIM) which was formed earlier in September 1945 under the leadership of Lukas Rumkorem. Lukas would be captured and exiled to Hollandia, with the charge he instigated violence among local population accused of trying to kill Frans Kaisiepo and Marcus Kaisiepo. Still the movement did not disappear in Biak, Stevanus Yoseph together with Petero Jandi, Terianus Simbiak, Honokh Rambrar, Petrus Kaiwai and Hermanus Rumere on 19 March 1948, instigate another revolt. Dutch authorities had to send reinforcements from Jayapura. The Dutch imposed a harder penalty, with capital punishment for Petro Jandi, and a life sentence to Stevanus Yoseph. Meanwhile, another organization was formed on the 17 August 1947, called the Association of Young Men of Indonesia (PPI) under the leadership of Abraham Koromath.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Around the Bomberai Peninsula area of Fakfak, specifically in Kokas, an Indonesian nationalist movement was led by Machmud Singgirei Rumagesan. On 1 March 1946, he ordered that all the Dutch's flags in Kokas to be changed into Indonesian flags. He was later imprisoned in Doom Island, Sorong, where he managed to recruit some followers as well as the support from local Sangaji Malan Dutch authorities later aided by incoming troops from Sorong arrested the King Rumagesan and he was given capital punishment. Meanwhile, in Kaimana, King Muhammad Achmad Aituarauw founded an organization called Independence With Kaimana, West Irian (MBKIB), which similarly boycotted Dutch flags every 31 August. In response of this activity, Aituarauw was arrested by the Dutch and exiled to Ayamaru for 10 years in 1948. Other movements opposing the Dutch under local Papuan kings includes, New Guinea Islamic Union (KING) led by Ibrahim Bauw, King of Rumbati, Gerakan Pemuda Organisasi Muda led by Machmud Singgirei Rumagesan and Abbas Iha, and Persatuan Islam Kaimana (PIK) of Kaimana led by Usman Saad and King of Namatota, Umbair.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Following the Indonesian National Revolution, the Netherlands formally transferred sovereignty to the United States of Indonesia, on 27 December 1949. However, the Dutch refused to include Netherlands New Guinea in the new Indonesian Republic and took steps to prepare it for independence as a separate country. Following the failure of the Dutch and Indonesians to resolve their differences over West New Guinea during the Dutch-Indonesian Round Table Conference in late 1949, it was decided that the present status quo of the territory would be maintained and then negotiated bilaterally one year after the date of the transfer of sovereignty. However, both sides were still unable to resolve their differences in 1950, which led the Indonesian President Sukarno to accuse the Dutch of reneging on their promises to negotiate the handover of the territory. On 17 August 1950, Sukarno dissolved the United States of Indonesia and proclaimed the unitary Republic of Indonesia. Indonesia also began to initiate incursions to New Guinea in 1952, though most of these efforts would be unsuccessful. Most of these failed infiltrators would be sent to Boven-Digoel which would form clandestine intelligence groups working from the primarily southern part of New Guinea in preparation for war. Meanwhile, following the defeat of the third Afro-Asian resolution in November 1957, the Indonesian government embarked on a national campaign targeting Dutch interests in Indonesia; A total of 700 Dutch-owned companies with a valuation total of around $1.5 billion was nationalised. By January 1958, ten thousand Dutch nationals had left Indonesia, many returning to the Netherlands. By June 1960, around thirteen thousand Dutch nationals mostly Eurasians from New Guinea left for Australia, with around a thousand moving to the Netherlands. Following a sustained period of harassment against Dutch diplomatic representatives in Jakarta, the Indonesian government formally severed relations with the Netherlands in August 1960.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "In response to Indonesian aggression, the Netherlands government stepped up its efforts to prepare the Papuan people for self-determination in 1959. These efforts culminated in the establishment of a hospital in Hollandia (modern–day Jayapura, currently Jayapura Regional General Hospital or RSUD Jayapura), a shipyard in Manokwari, agricultural research sites, plantations, and a military force known as the Papuan Volunteer Corps. By 1960, a legislative New Guinea Council had been established with a mixture of legislative, advisory and policy functions. Half of its members were to be elected, and elections for this council were held the following year. Most importantly, the Dutch also sought to create a sense of West Papuan national identity, and these efforts led to the creation of a national flag (the Morning Star flag), a national anthem, and a coat of arms. The Dutch had planned to transfer independence to West New Guinea in 1970.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Following the raising of the Papuan National Flag on 1 December 1961, tensions further escalated. Multiple rebellions erupted inside New Guinea against Dutch authorities, such as in Enarotali, Agats, Kokas, Merauke, Sorong and Baliem Valley. On 18 December 1961 Sukarno issued the Tri Komando Rakjat (People's Triple Command), calling the Indonesian people to defeat the formation of an independent state of West Papua, raise the Indonesian flag in the territory, and be ready for mobilisation at any time. In 1962 Indonesia launched a significant campaign of airborne and seaborne infiltrations against the disputed territory, beginning with a seaborne infiltration launched by Indonesian forces on 15 January 1962. The Indonesian attack was defeated by Dutch forces including the Dutch destroyers Evertsen and Kortenaer, the so-called Vlakke Hoek incident. Amongst the casualties was the Indonesian Deputy Chief of the Naval Staff; Commodore Yos Sudarso.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "It finally was agreed through the New York Agreement in 1962 that the administration of Western New Guinea would be temporarily transferred from the Netherlands to Indonesia and that by 1969 the United Nations should oversee a referendum of the Papuan people, in which they would be given two options: to remain part of Indonesia or to become an independent nation. For a period of time, Dutch New Guinea were under the United Nations Temporary Executive Authority, before being transferred to Indonesia in 1963. A referendum was held in 1969, which was referred locally as Penantuan Pendapat Rakyat (Determination of the People's Opinion) or Act of Free Choice by independence activists. The referendum was recognized by the international community and the region became the Indonesian province of Irian Jaya. The province has been renamed as Papua since 2002.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Following the Act of Free Choice in 1969, Western New Guinea was formally integrated into the Republic of Indonesia. Instead of a referendum of the 816,000 Papuans, only 1,022 Papuan tribal representatives were allowed to vote, and they were coerced into voting in favor of integration. While several international observers including journalists and diplomats criticized the referendum as being rigged, the U.S. and Australia support Indonesia's efforts to secure acceptance in the United Nations for the pro-integration vote. That same year, 84 member states voted in favor for the United Nations to accept the result, with 30 others abstaining. Due to the Netherlands' efforts to promote a West Papuan national identity, a significant number of Papuans refused to accept the territory's integration into Indonesia. These formed the separatist Organisasi Papua Merdeka (Free Papua Movement) and have waged an insurgency against the Indonesian authorities, which continues to this day.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "In January 2003 President Megawati Sukarnoputri signed an order dividing Papua into three provinces: Central Irian Jaya (Irian Jaya Tengah), Papua (or East Irian Jaya, Irian Jaya Timur), and West Papua (Irian Jaya Barat). The formality of installing a local government for Jakarta in Irian Jaya Barat (West) took place in February 2003 and a governor was appointed in November; a government for Irian Jaya Tengah (Central Irian Jaya) was delayed from August 2003 due to violent local protests. The creation of this separate Central Irian Jaya Province was blocked by Indonesian courts, who declared it to be unconstitutional and in contravention of the Papua's special autonomy agreement. The previous division into two provinces was allowed to stand as an established fact.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "Following his election in 2014, Indonesian president, Joko Widodo, embarked on reforms intended to alleviate grievances of Native Papuans, such as stopping the transmigration program and starting massive infrastructure spending in Papua, including building Trans-Papua roads network. The Joko Widodo administration has prioritized infrastructure and human resource development as a great framework for solving the conflict in Papua. The administration has implemented a one-price fuel policy in Papua, with Jokowi assessing that it is a form of \"justice\" for all Papuans. The administration has also provided free primary and secondary education.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "Security forces have been accused of abuses in the region including extrajudicial killings, torture, arrests of activists, and displacements of entire villages. On the other hand separatists have been accused and claimed much of the same violence, such as extrajudicial killings of both Papuan and non-Papuan civilians, torture, rapes, and attacking local villages. Protests against Indonesian rule in Papua happen frequently, the most recent being the 2019 Papua protests, one of the largest and most violent, which include burning of mostly non-Papuan civilians and Papuans that did not want to join the rally.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "In July 2022, regencies in central and southern Papua were separated from the province, to be created into three new provinces: South Papua administered from Merauke Regency, Central Papua administered from Nabire Regency, and Highlands Papua administered from Jayawijaya Regency.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "The province of Papua is governed by a directly elected governor and a regional legislature, People's Representative Council of Papua (Dewan Perwakilan Rakyat Papua, abbreviated as DPRP or DPR Papua). A unique government organization in the province is the Papuan People's Assembly (Majelis Rakyat Papua), which was formed by the Indonesian government in 2005, as mandated by the Papua Special Autonomy Law, as a coalition of Papuan tribal chiefs, Papuan religious leaders, and Papuan women representatives, tasked with arbitration and speaking on behalf of Papuan tribal customs.",
"title": "Politics"
},
{
"paragraph_id": 32,
"text": "Since 2014, the DPRP has 55 members who are elected through General elections every five years and 14 people who are appointed through the special autonomy, bringing the total number of DPRP members to 69 people. The DPRP leadership consists of 1 Chairperson and 3 Deputy Chairmen who come from political parties that have the most seats and votes. The current DPRP members are the results of the 2019 General Election which was sworn in on 31 October 2019 by the Chairperson of the Jayapura High Court at the Papua DPR Building. The composition of DPRP members for the 2019–2024 period consists of 13 political parties where the Nasdem Party is the political party with the most seats, with 8 seats, followed by the Democratic Party which also won 8 seats and the Indonesian Democratic Party of Struggle which won 7 seats.",
"title": "Politics"
},
{
"paragraph_id": 33,
"text": "The province of Papua is one of six provinces to have obtained special autonomy status, the others being Aceh, West Papua, Central Papua, Highland Papua and South Papua (the Special Regions of Jakarta and Yogyakarta have a similar province-level special status). According to Law 21/2001 on Special Autonomy Status (UU Nomor 21 Tahun 2001 tentang Otonomi khusus Papua), the provincial government of Papua is provided with authority within all sectors of administration, except for the five strategic areas of foreign affairs, security and defense, monetary and fiscal affairs, religion and justice. The provincial government is authorized to issue local regulations to further stipulate the implementation of the special autonomy, including regulating the authority of districts and municipalities within the province. Due to its special autonomy status, Papua province is provided with significant amount of special autonomy funds, which can be used to benefit its indigenous peoples. But the province has low fiscal capacity and it is highly dependent on unconditional transfers and the above-mentioned special autonomy fund, which accounted for about 55% of total revenues in 2008.",
"title": "Politics"
},
{
"paragraph_id": 34,
"text": "After obtaining its special autonomy status, to allow the local population access to timber production benefits, the Papuan provincial government issued a number of decrees, enabling:",
"title": "Politics"
},
{
"paragraph_id": 35,
"text": "As of 2022 (following the separation of Central Papua, Highland Papua, and South Papua province), the residual Papua Province consisted of 8 regencies (kabupaten) and one city (kota); on the map below, these regencies comprise the northern belt from Waropen Regency to Keerom Regency, plus the island groups to their northwest. Initially the area now forming the present Papua Province contained three regencies - Jayapura, Yapen Waropen and Biak Numfor. The City of Jayapura was separated on 2 August 1993 from Jayapura Regency and formed into a province-level administration. On 11 December 2002 three new regencies were created - Keerom and Sarmi from parts of Jayapura Regency, and Waropen from part of Yapen Waropen Regency (the rest of this regency was renamed as Yapen Islands). On 18 December 2003 a further regency - Supiori - was created from part of Biak Numfor Regency, and on 15 March 2007 a further regency - Mamberamo Raya - was created from the western part of Sarmi Regency. These regencies and the city are together subdivided as into districts (distrik), and thence into \"villages\" (kelurahan and desa). With the release of the Act Number 21 of 2001 concerning the Special Autonomous Region of Papua Province, the term distrik was used instead of kecamatan in the entire Western New Guinea. The difference between the two is merely the terminology, with kepala distrik being the district head.",
"title": "Politics"
},
{
"paragraph_id": 36,
"text": "The regencies (kabupaten) and the city (kota) are listed below with their areas and their populations at the 2020 census and subsequent official estimates for mid 2022, together with the 2020 Human Development Index of each administrative divisions.",
"title": "Politics"
},
{
"paragraph_id": 37,
"text": "The island of New Guinea lies to the east of the Malay Archipelago, with which it is sometimes included as part of a greater Indo-Australian Archipelago. Geologically it is a part of the same tectonic plate as Australia. When world sea levels were low, the two shared shorelines (which now lie 100 to 140 metres below sea level), and combined with lands now inundated into the tectonic continent of Sahul, also known as Greater Australia. The two landmasses became separated when the area now known as the Torres Strait flooded after the end of the Last Glacial Period.",
"title": "Environment"
},
{
"paragraph_id": 38,
"text": "The province of Papua is located between 2 ° 25'LU – 9 ° S and 130 ° – 141 ° East. The total area of Papua is now 82,680.95 km (31,923.29 sq. miles). Until its division in 2022 into four provinces, Papua Province was the province that had the largest area in Indonesia, with a total area of 312,816.35 km, or 19.33% of the total area of the Indonesian archipelago. The boundaries of Papua are: Pacific Ocean (North), Highland Papua (South), Central Papua (Southwest) and Papua New Guinea (East). Papua, like most parts of Indonesia, has two seasons, the dry season and the rainy season. From June to September the wind flows from Australia and does not contain much water vapor resulting in a dry season. On the other hand, from December to March, the wind currents contain a lot of water vapor originating from Asia and the Pacific Ocean so that the rainy season occurs. The average temperature in Papua ranges from 19 °C to 28 °C and humidity is between 80% and 89%. The average annual rainfall is between 1,500 mm and 7,500 mm. Snowfalls sometime occurs in the mountainous areas of New Guinea, especially the central highlands region.",
"title": "Environment"
},
{
"paragraph_id": 39,
"text": "Various other smaller mountain ranges occur both north and west of the central ranges. Except in high elevations, most areas possess a hot, humid climate throughout the year, with some seasonal variation associated with the northeast monsoon season.",
"title": "Environment"
},
{
"paragraph_id": 40,
"text": "Another major habitat feature is the vast northern lowlands. Stretching for hundreds of kilometers, these include lowland rainforests, extensive wetlands, savanna grasslands, and some of the largest expanses of mangrove forest in the world. The northern lowlands are drained principally by the province's largest river, the Mamberamo River and its tributaries on the western side, and by the Sepik on the eastern side. The result is a large area of lakes and rivers known as the Lakes Plains region.",
"title": "Environment"
},
{
"paragraph_id": 41,
"text": "Anthropologically, New Guinea is considered part of Melanesia. Botanically, New Guinea is considered part of Malesia, a floristic region that extends from the Malay Peninsula across Indonesia to New Guinea and the East Melanesian Islands. The flora of New Guinea is a mixture of many tropical rainforest species with origins in Asia, together with typically Australasian flora. Typical Southern Hemisphere flora include the Conifers Podocarpus and the rainforest emergents Araucaria and Agathis, as well as Tree ferns and several species of Eucalyptus.",
"title": "Environment"
},
{
"paragraph_id": 42,
"text": "New Guinea is differentiated from its drier, flatter, and less fertile southern counterpart, Australia, by its much higher rainfall and its active volcanic geology. Yet the two land masses share a similar animal fauna, with marsupials, including wallabies and possums, and the egg-laying monotreme, the echidna. Other than bats and some two dozen indigenous rodent genera, there are no pre-human indigenous placental mammals. Pigs, several additional species of rats, and the ancestor of the New Guinea singing dog were introduced with human colonization.",
"title": "Environment"
},
{
"paragraph_id": 43,
"text": "The island has an estimated 16,000 species of plant, 124 genera of which are endemic. Papua's known forest fauna includes; marsupials (including possums, wallabies, tree-kangaroos, cuscuses); other mammals (including the endangered long-beaked echidna); bird species such as birds-of-paradise, cassowaries, parrots, and cockatoos; the world's longest lizards (Papua monitor); and the world's largest butterflies.",
"title": "Environment"
},
{
"paragraph_id": 44,
"text": "The waterways and wetlands of Papua are also home to salt and freshwater crocodile, tree monitors, flying foxes, osprey, bats and other animals; while the equatorial glacier fields remain largely unexplored.",
"title": "Environment"
},
{
"paragraph_id": 45,
"text": "Protected areas within Papua province include the World Heritage Lorentz National Park, and the Wasur National Park, a Ramsar wetland of international importance. Birdlife International has called Lorentz Park \"probably the single most important reserve in New Guinea\". It contains five of World Wildlife Fund's \"Global 200\" ecoregions: Southern New Guinea Lowland Forests; New Guinea Montane Forests; New Guinea Central Range Subalpine Grasslands; New Guinea mangroves; and New Guinea Rivers and Streams. Lorentz Park contains many unmapped and unexplored areas, and is certain to contain many species of plants and animals as yet unknown to Western science. Local communities' ethnobotanical and ethnozoological knowledge of the Lorentz biota is also very poorly documented. On the other hand, Wasur National Park has a very high value biodiversity has led to the park being dubbed the \"Serengeti of Papua\". About 70% of the total area of the park consists of savanna (see Trans-Fly savanna and grasslands), while the remaining vegetation is swamp forest, monsoon forest, coastal forest, bamboo forest, grassy plains and large stretches of sago swamp forest. The dominant plants include Mangroves, Terminalia, and Melaleuca species. The park provides habitat for a large variety of up to 358 bird species of which some 80 species are endemic to the island of New Guinea. Fish diversity is also high in the region with some 111 species found in the eco-region and a large number of these are recorded from Wasur. The park's wetland provides habitat for various species of lobster and crab as well.",
"title": "Environment"
},
{
"paragraph_id": 46,
"text": "Several parts of the province remains unexplored due to steep terrain, leaving a high possibility that there are still many undiscovered floras and faunas that is yet to be discovered. In February 2006, a team of scientists exploring the Foja Mountains, Sarmi, discovered new species of birds, butterflies, amphibians, and plants, including possibly the largest-flowered species of rhododendron. In December 2007, a second scientific expedition was taken to the mountain range. The expedition led to the discovery of two new species: the first being a 1.4 kg giant rat (Mallomys sp.) approximately five times the size of a regular brown rat, the second a pygmy possum (Cercartetus sp.) described by scientists as \"one of the world's smallest marsupials.\" An expedition late in 2008, backed by the Indonesian Institute of Sciences, National Geographic Society and Smithsonian Institution, was made in order to assess the area's biodiversity. New types of animals recorded include a frog with a long erectile nose, a large woolly rat, an imperial-pigeon with rust, grey and white plumage, a 25 cm gecko with claws rather than pads on its toes, and a small, 30 cm high, black forest wallaby (a member of the genus Dorcopsis).",
"title": "Environment"
},
{
"paragraph_id": 47,
"text": "Ecological threats include logging-induced deforestation, forest conversion for plantation agriculture (including oil palm), smallholder agricultural conversion, the introduction and potential spread of alien species such as the crab-eating macaque which preys on and competes with indigenous species, the illegal species trade, and water pollution from oil and mining operations.",
"title": "Environment"
},
{
"paragraph_id": 48,
"text": "Papua GDP share by sector (2005)",
"title": "Economy"
},
{
"paragraph_id": 49,
"text": "Papua is reported to be one of Indonesia's poorest regions. The province is rich in natural resources but has weaknesses namely in limited infrastructure and less skilled human resources. So far, Papua has had a fairly good economic development due to the support of economic sources, especially mining, forest, agriculture and fisheries products. Economic development has been uneven in Papua, and poverty in the region remains high by Indonesian standards. Part of the problem has been neglect of the poor—too little or the wrong kind of government support from Jakarta and Jayapura. A major factor in this is the extraordinarily high cost of delivering goods and services to large numbers of isolated communities, in the absence of a developed road or river network (the latter in contrast to Kalimantan) providing access to the interior and the highlands. Intermittent political and military conflict and tight security controls have also contributed to the problem but with the exception of some border regions and a few pockets in the highlands, this has not been the main factor contributing to underdevelopment.",
"title": "Economy"
},
{
"paragraph_id": 50,
"text": "Papua's gross domestic product grew at a faster rate than the national average until, and throughout the financial crisis of 1997–98. However, the differences are much smaller if mining is excluded from the provincial GDP. Given that most mining revenues were commandeered by the central government until the Special Autonomy Law was passed in 2001, provincial GDP without mining is most likely a better measure of Papuan GDP during the pre- and immediate post-crisis periods. On a per capita basis, the GDP growth rates for both Papua and Indonesia are lower than those for total GDP. However, the gap between per capita GDP and total GDP is larger for Papua than for Indonesia as a whole, reflecting Papua's high population growth rates.",
"title": "Economy"
},
{
"paragraph_id": 51,
"text": "Although Papua has experienced almost no growth in GDP, the situation is not as serious as one might think. It is true that the mining sector, dominated by Freeport Indonesia, has been declining over the last decade or so, leading to a fall in the value of exports. On the other hand, government spending and fixed capital investment have both grown, by well over 10 per cent per year, contributing to growth in sectors such as finance, construction, transport and communications, and trade, hotels and restaurants. With so many sectors still experiencing respectable levels of growth, the impact of the stagnant economy on the welfare of the population will probably be limited. It should also be remembered that mining is typically an enclave activity; its impact on the general public is fairly limited, regardless of whether it is booming or contracting.",
"title": "Economy"
},
{
"paragraph_id": 52,
"text": "Papua has depended heavily on natural resources, especially the mining, oil and gas sectors, since the mid-1970s. Although this is still the case, there have been some structural changes in the two provincial economies since the split in 2003. The contribution of mining to the economy of Papua province declined from 62 per cent in 2003 to 47 per cent in 2012. The shares of agriculture and manufacturing also fell, but that of utilities remained the same. A few other sectors, notably construction and services, increased their shares during the period. Despite these structural changes, the economy of Papua province continues to be dominated by the mining sector, and in particular by a single company, Freeport indonesia.",
"title": "Economy"
},
{
"paragraph_id": 53,
"text": "Mining is still and remains one of the dominant economic sector in Papua. The Grasberg Mine, the world's largest gold mine and second-largest copper mine, is located in the highlands near Puncak Jaya, the highest mountain in Papua and whole Indonesia. Grasberg Mine producing 1.063 billion pounds of copper, 1.061 million ounces gold and 2.9 million ounces silver. It has 19,500 employees operated by PT Freeport Indonesia (PT-FI) which used to be 90.64% owned by Freeport-McMoran (FCX). In August 2017, FCX announced that it will divest its ownership in PT-FI so that Indonesia owns 51%. In return the CoW will be replaced by a special license (IUPK) with mining rights to 2041 and FCX will build a new smelter by 2022.",
"title": "Economy"
},
{
"paragraph_id": 54,
"text": "Besides mining, there are at least three other important economic sectors (excluding the government sector) in the Papuan economy. The first is agriculture, particularly food crops, forestry and fisheries. Agriculture made up 10.4 per cent of provincial GDP in 2005 but grew at an average rate of only 0.1 per cent per annum in 2000–05. The second important sector is trade, hotels and restaurants, which contributed 4.0 per cent of provincial GDP in 2005. Within this sector, trade contributed most to provincial GDP. However, the subsector with the highest growth rate was hotels, which grew at 13.2 per cent per annum in 2000–05. The third important sector is transport and Communications, which contributed 3.4 per cent of provincial GDP in 2005. The sector grew at an average annual rate of 5.3 percent in 2000–05, slightly below the national level. Within the sector, sea transport, air transport and communications performed particularly well. The role of private enterprise in developing communications and air transport has become increasingly significant. Since private enterprise will only expand if businesspeople see good prospects to make a profit, this is certainly an encouraging development. At current rates of growth, the transport and communications sector could support the development of agriculture in Papua. However, so far, most of the growth in communications has been between the rapidly expanding urban areas of Jayapura, Timika, Merauke, and between them and the rest of Indonesia. Nevertheless, in the medium term, improved communication networks may create opportunities for Papua to shift from heavy dependence on the mining sector to greater reliance on the agricultural sector. With good international demand for palm oil anticipated in the medium term, production of this commodity could be expanded. However, the negative effects of deforestation on the local environment should be a major consideration in the selection of new areas for this and any other plantation crop. In 2011, Papuan caretaker governor Syamsul Arief Rivai claimed Papua's forests cover 42 million hectares with an estimated worth of Rp 700 trillion ($78 billion) and that if the forests were managed properly and sustainably, they could produce over 500 million cubic meters of logs per annum.",
"title": "Economy"
},
{
"paragraph_id": 55,
"text": "Manufacturing and banking make up a tiny proportion of the regional economy and experienced negative growth in 2000–05. Poor infrastructure and lack of human capital are the most likely reasons for the poor performance of manufacturing. In addition, the costs of manufacturing are typically very high in Papua, as they are in many other outer island regions of Indonesia. Both within Indonesia and in the world economy, Papua's comparative advantage will continue to lie in agriculture and natural resource-based industries for a long time to come. A more significant role for manufacturing is unlikely given the far lower cost of labor and better infrastructure in Java. But provided that there are substantial improvements in infrastructure and communications, over the longer term manufacturing can be expected to cluster around activities related to agriculture—for example, food processing.",
"title": "Economy"
},
{
"paragraph_id": 56,
"text": "Compared to other parts of Indonesia, the infrastructure in Papua is one of the most least developed, owing to its distance from the national capital Jakarta. Nevertheless, for the past few years, the central government has invested significant sums of money to build and improve the current infrastructure in the province. The infrastructure development efforts of the Ministry of Public Works and Housing in Papua have been very massive in the last 10 years. This effort is carried out to accelerate equitable development and support regional development in Papua. The main focus of infrastructure development in Papua is to improve regional connectivity, improve the quality of life through the provision of basic infrastructure and increase food security through the development of water resources infrastructure. The achievements and conditions of infrastructure development in Papua until 2017 have shown significant progress.",
"title": "Infrastructure"
},
{
"paragraph_id": 57,
"text": "Electricity distribution in the province as well as the whole country is operated and managed by the Perusahaan Listrik Negara (PLN). Originally, most Papuan villages do not have access to electricity. The Indonesia government through the Ministry of Energy and Mineral Resources, in the beginning of year 2016, introduced a program named \"Indonesia Terang\" or Bright Indonesia. The aimed of this program is to speed up Electrification Rate (ER) with priority to the six provinces at Eastern area of Indonesia including Papua Province. The target of Indonesian's ER by 2019 is 97%. While the Indonesian's national ER already high (88.30%) in 2015, Papua still the lowest ER (45.93%) among the provinces. The scenario to boost up ER in the Eastern area by connected the consumers at villages which not electrified yet to the new Renewable Energy sources.",
"title": "Infrastructure"
},
{
"paragraph_id": 58,
"text": "The percentage of household that were connected to the electricity in Papua (Electrification ratio/ER) is the lowest one among the provinces in Indonesia. Data from the Ministry of Energy and Mineral Resources shows that only Papua Province has ER level below 50% (45.93%) with the national average RE was 88.30%. High ER of more than 85% can be found in the rest of west area of the country. The main reason of lowest ER in Papua is a huge area with landlocked and mountain situation and low density population. Energy consumption in residential sector, 457 GWh in year 2014, contributes the electrification rate in Papua Province. But again, geographic and demographic obstacle made the electrical energy not well dispersed in Papua. The ER levels are usually higher in the coastal area but become low in the mountain area. These can be seen by the formation of new provinces in 2022: Papua Province has an ER of 89.22%, while the former regions of South Papua has an ER of 73.54%, Central Papua has an ER of 47.36%, and Highland Papua has an ER of 12.09%.",
"title": "Infrastructure"
},
{
"paragraph_id": 59,
"text": "All pipes water supply in the province is managed by the Papua Municipal Waterworks (Indonesian: Perusahaan Daerah Air Minum Papua – PDAM Papua ). The supply of clean water is one of the main problem faced by the province, especially during drought seasons. Papua has been named as the province with the worst sanitation in Indonesia, garnering a score of 45 while the national average is 75, due to unhealthy lifestyle habits and a lack of clean water. In response, the government has invested money to build the sufficient infrastructure to hold clean water. Several new dams are also being built by the government throughout the province.",
"title": "Infrastructure"
},
{
"paragraph_id": 60,
"text": "Achieving universal access to drinking water, sanitation and hygiene is essential to accelerating progress in the fields of health, education and poverty alleviation. In 2015, about a quarter of the population used basic sanitation facilities at home, while a third still practiced open defecation. The coverage of improved drinking water sources is much higher, both in households and schools. Inequality based on income and residence levels is stark, demonstrating the importance of integrating equity principles into policy and practice and expanding the coverage of community-based total sanitation programs.",
"title": "Infrastructure"
},
{
"paragraph_id": 61,
"text": "Papua is one of the larger province in Indonesia, but it has the least amount of telecommunications services due to geographic isolation. The deployment of service to the district and to the sub district is still not evenly distributed. The distribution of telecommunication services in Papua is still very uneven. This is indicated by the percentage of the number of telecommunication services and infrastructure whose distribution is centralized in certain areas such as Jayapura. Based on data, the Human Development Index in Papua increases every year but is not accompanied by an increase adequate number of telecommunication facilities.",
"title": "Infrastructure"
},
{
"paragraph_id": 62,
"text": "The Ministry of Communication and Information Technology through the Information Technology Accessibility Agency (BAKTI) has built around 9 base transceiver stations in remote areas of Papua, namely Puncak Jaya Regency and Mamberamo Raya Regency, to connect to internet access. In the early stages, the internet was prioritized to support the continuity of education, health and better public services. To realize connectivity in accordance with government priorities, the Ministry of Communication and Information is determined to reach all districts in the Papua region with high-speed internet networks by 2020. It is planned that all districts in Papua and West Papua will build a fast internet backbone network. There are 31 regencies that have new high-speed internet access to be built.",
"title": "Infrastructure"
},
{
"paragraph_id": 63,
"text": "In late 2019, the government announced the completion of the Palapa Ring project – a priority infrastructure project that aimed to provide access to 4G internet services to more than 500 regencies across Indonesia, Papua included. The project is estimated to have cost US$1.5 billion and comprises 35,000 km (21,747 miles) of undersea fiber-optic cables and 21,000 km (13,000 miles) of land cables, stretching from the westernmost city in Indonesia, Sabang to the easternmost town, Merauke, which is located in Papua. Additionally, the cables also transverse every district from the northernmost island Miangas to the southernmost island, Rote. Through the Palapa Ring, the government can facilitate a network capacity of up to 100 Gbit/s in even the most outlying regions of the country.",
"title": "Infrastructure"
},
{
"paragraph_id": 64,
"text": "So far, air routes have been a mainstay in Papua and West Papua provinces as a means of transporting people and goods, including basic necessities, due to inadequate road infrastructure conditions. This has resulted in high distribution costs which have also increased the prices of various staple goods, especially in rural areas. Therefore, the government is trying to reduce distribution costs by building the Trans-Papua Highway. As of 2016, the Trans-Papua highway that has been connected has reached 3,498 kilometers, with asphalt roads for 2,075 kilometers, while the rest are still dirt roads, and roads that have not been connected have reached 827 km. The development of the Trans-Papua highway will create connectivity between regions so that it can have an impact on the acceleration of economic growth in Papua and West Papua in the long term. Apart from the construction of the Trans-Papua highway, the government is also preparing for the first railway development project in Papua, which is currently entering the feasibility study phase. The said infrastructure funding for Papua is not insignificant. The need to connect all roads in Papua and West Papua is estimated at Rp. 12.5 trillion (US$870 million). In the 2016 State Budget, the government has also budgeted an additional infrastructure development fund of Rp. 1.8 trillion (US$126 million).",
"title": "Infrastructure"
},
{
"paragraph_id": 65,
"text": "Data from the Ministry of Public Works and Housing (KPUPR) states, the length of the Trans-Papua highway in Papua reaches 2,902 km. These include Merauke-Tanahmerah-Waropko (543 km), Waropko-Oksibil (136 km), Dekai-Oksibil (225 km), and Kenyam-Dekai (180 km). Then, Wamena-Habema-Kenyam-Mamug (295 km), Jayapura-Elelim-Wamena (585 km), Wamena-Mulia-Ilaga-Enarotali (466 km), Wagete-Timika (196 km), and Enarotali-Wagete-Nabire (285 km). As of 2020, only about 200–300 kilometers of the Trans-Papua highwat have not been connected.",
"title": "Infrastructure"
},
{
"paragraph_id": 66,
"text": "As in other provinces in Indonesia, Papua uses a dual carriageway with the left-hand traffic rule, and cities and towns such as Jayapura and Merauke provide public transportation services such as buses and taxis along with Gojek and Grab services. Currently, the Youtefa Bridge in Jayapura is the longest bridge in the province, with a total length of 732 metres (2,402 ft). The bridge cut the distance and travel time from Jayapura city center to Muara Tami district as well as Skouw State Border Post at Indonesia–Papua New Guinea border. The bridge construction was carried out by consortium of state-owned construction companies PT Pembangunan Perumahan Tbk, PT Hutama Karya (Persero), and PT Nindya Karya (Persero), with a total construction cost of IDR 1.87 trillion and support from the Ministry of Public Works and Housing worth IDR 1.3 trillion. The main span assembly of the Youtefa Bridge was not carried out at the bridge site, but at PAL Indonesia shipyard in Surabaya, East Java. Its production in Surabaya aims to improve safety aspects, improve welding quality, and speed up the implementation time to 3 months. This is the first time where the arch bridge is made elsewhere and then brought to the location. From Surabaya the bridge span, weighing 2000 tons and 112.5 m long, was sent by ship with a 3,200 kilometers journey in 19 days. Installation of the first span was carried out on 21 February 2018, while the second span was installed on 15 March 2018 with an installation time of approximately 6 hours. The bridge was inaugurated on 28 October 2019 by President Joko Widodo.",
"title": "Infrastructure"
},
{
"paragraph_id": 67,
"text": "A railway with a length of 205 km is being planned, which would connect the provincial capital Jayapura and Sarmi to the east. Further plans include connecting the railway to Sorong and Manokwari in West Papua. In total, the railway would have a length of 595 km, forming part of the Trans-Papua Railway. Construction of the railway is still in the planning stage. A Light Rapid Transport (LRT) connecting Jayapura and Sentani is also being planned.",
"title": "Infrastructure"
},
{
"paragraph_id": 68,
"text": "The geographical conditions of Papua which are hilly and have dense forests and do not have adequate road infrastructure, such as in Java or Sumatra, make transportation a major obstacle for local communities. Air transportation using airplanes is by far the most effective means of transportation and is needed most by the inhabitants of the island, although it is not cheap for it. A number of airlines are also scrambling to take advantage of the geographical conditions of the island by opening busy routes to and from a number of cities, both district and provincial capitals. If seen from the sufficient condition of the airport infrastructure, there are not a few airports that can be landed by jets like Boeing and Airbus as well as propeller planes such as ATR and Cessna.",
"title": "Infrastructure"
},
{
"paragraph_id": 69,
"text": "Sentani International Airport in Jayapura is the largest airport in the province, serving as the main gateway to the province from other parts of Indonesia. The air traffic is roughly divided between flights connecting to destinations within the Papua province and flights linking Papua to other parts of Indonesia. The airport connects Jayapura with other Indonesian cities such as Manado, Makassar, Surabaya and Jakarta, as well as towns within the province such as Biak, Timika and Merauke. Sentani International Airport is also the main base for several aviation organizations, including Associated Mission Aviation, Mission Aviation Fellowship, YAJASI and Tariku Aviation. The airport currently does not have any international flights, although there are plans to open new airline routes to neighboring Papua New Guinea in the future. Other medium-sized airports in the province are Mozes Kilangin Airport in Timika, Mopah International Airport in Merauke, Frans Kaisiepo International Airport in Biak, and Wamena Airport in Wamena. There are over 300 documented airstrips in Papua, consisting of mostly small airstrips that can only be landed by small airplanes. The government is planning to open more airports in the future to connect isolated regions in the province.",
"title": "Infrastructure"
},
{
"paragraph_id": 70,
"text": "Water transportation, which includes sea and river transportation, is also one of the most crucial form of transportation in the province, after air transportation. The number of passengers departing by sea in Papua in October 2019 decreased by 16.03 percent, from 18,785 people in September 2019 to 15,773 people. The number of passengers arriving by sea in October 2019 decreased by 12.32 percent, from 11,108 people in September 2019 to 9,739 people. The volume of goods loaded in October 2019 was recorded at 17,043 tons, an increase of 30.57 percent compared to the volume in September 2019 which amounted to 13,053 tons. The volume of goods unloaded in October 2019 was recorded at 117,906 tons or a decrease of 2.03 percent compared to the volume in September 2019 which amounted to 120,349 tons.",
"title": "Infrastructure"
},
{
"paragraph_id": 71,
"text": "There are several ports in the province, with the Port of Depapre in Jayapura being the largest, which started operation in 2021. There are also small to medium-sized ports in Biak, Timika, Merauke and Agats, which serves passenger and cargo ships within the province, as well as from other Indonesian provinces.",
"title": "Infrastructure"
},
{
"paragraph_id": 72,
"text": "Health-related matters in the Papua is administered by the Papua Provincial Health Agency (Indonesian: Dinas Kesehatan Provinsi Papua). According to the Indonesian Central Agency on Statistics, as of 2015, there are around 13,554 hospitals in Papua which consists of 226 state-owned hospitals and 13,328 private hospitals. Furthermore, there are 394 clinics spread throughout the province. The most prominent hospital is the Papua Regional General Hospital (Indonesian: Rumah Sakit Umum Daerah Papua) in Jayapura, which is the largest state-owned hospital in the province.",
"title": "Infrastructure"
},
{
"paragraph_id": 73,
"text": "Papua is reported to have the highest rates of child mortality and HIV/AIDS in Indonesia. Lack of good healthcare infrastructure is one of the main issues in Papua as of today, especially in the remote regions, as most hospitals that have adequate facilities are only located at major cities and towns. A measles outbreak and famine killed at least 72 people in Asmat regency in early 2018, during which 652 children were affected by measles and 223 suffered from malnutrition.",
"title": "Infrastructure"
},
{
"paragraph_id": 74,
"text": "Education in Papua, as well as Indonesia in a whole, falls under the responsibility of the Ministry of Education and Culture (Kementerian Pendidikan dan Kebudayaan or Kemdikbud) and the Ministry of Religious Affairs (Kementerian Agama or Kemenag). In Indonesia, all citizens must undertake twelve years of compulsory education which consists of six years at elementary level and three each at middle and high school levels. Islamic schools are under the responsibility of the Ministry of Religious Affairs. The Constitution also notes that there are two types of education in Indonesia: formal and non-formal. Formal education is further divided into three levels: primary, secondary and tertiary education. Indonesians are required to attend 12 years of school, which consists of three years of primary school, three years of secondary school and three years of high school.",
"title": "Infrastructure"
},
{
"paragraph_id": 75,
"text": "As of 2015, there are 3 public universities and 40 private universities in Papua. Public universities in Papua fall under the responsibility of the Ministry of Research and Technology (Kementerian Riset dan Teknologi) as well as the Ministry of Education and Culture. The most famous university in the province is the Cenderawasih University in Jayapura. The university has faculties in economics, law, teacher training and education, medical, engineering, and social and political science. Until 2002 the university had a faculty of agricultural sciences at Manokwari, which was then separated to form the Universitas Negeri Papua.",
"title": "Infrastructure"
},
{
"paragraph_id": 76,
"text": "While the Papuan branch of the Central Agency on Statistics had earlier projected the 2020 population of the province to be 3,435,430 people the actual census in 2020 revealed a total population of 4,303,707, spread throughout 28 regencies and 1 administrative city. The city of Jayapura is the most populated administrative division in the province, with a total of 398,478 people in 2020, while Supiori Regency, which comprises mainly the island of Supiori, one of the Schouten Islands within Cenderawasih Bay off the north coast of Papua, is the least populated administrative division in the province, with just 22,547 people. Most of the population in the province are concentrated in coastal regions, especially around the city of Jayapura and its suburbs. Papua is also home to many migrants from other parts of Indonesia, of which an overwhelming percentage of these migrants came as part of a government-sponsored transmigration program. The transmigration program in Papua was only formally halted by President Joko Widodo in June 2015.",
"title": "Demographics"
},
{
"paragraph_id": 77,
"text": "In contrast to other Indonesian provinces, which are mostly dominated by Austronesian peoples, Papua and West Papua as well as some part of Maluku are home to the Melanesians. The indigenous Papuans which are part of the Melanesians forms the majority of the population in the province. Many believe human habitation on the island dates to as early as 50,000 BC, and first settlement possibly dating back to 60,000 years ago has been proposed. The island of New Guinea is presently populated by almost a thousand different tribal groups and a near-equivalent number of separate languages, which makes it the most linguistically diverse area in the world. Current evidence indicates that the Papuans (who constitute the majority of the island's peoples) are descended from the earliest human inhabitants of New Guinea. These original inhabitants first arrived in New Guinea at a time (either side of the Last Glacial Maximum, approx 21,000 years ago) when the island was connected to the Australian continent via a land bridge, forming the landmass of Sahul. These peoples had made the (shortened) sea-crossing from the islands of Wallacea and Sundaland (the present Malay Archipelago) by at least 40,000 years ago.",
"title": "Demographics"
},
{
"paragraph_id": 78,
"text": "The ancestral Austronesian peoples are believed to have arrived considerably later, approximately 3,500 years ago, as part of a gradual seafaring migration from Southeast Asia, possibly originating in Taiwan. Austronesian-speaking peoples colonized many of the offshore islands to the north and east of New Guinea, such as New Ireland and New Britain, with settlements also on the coastal fringes of the main island in places. Human habitation of New Guinea over tens of thousands of years has led to a great deal of diversity, which was further increased by the later arrival of the Austronesians and the more recent history of European and Asian settlement.",
"title": "Demographics"
},
{
"paragraph_id": 79,
"text": "Papuan is also home to ethnic groups from other part of Indonesia, including the Javanese, Sundanese, Balinese, Batak, etc. Most of these migrants came as part of the transmigration program, which was an initiative of the Dutch colonial government and later continued by the Indonesian government to move landless people from densely populated areas of Indonesia to less populous areas of the country. The program was accused of fuelling marginalisation and discrimination of Papuans by migrants, and causing fears of the \"Javanisation\" or \"Islamisation\" of Papua. There is open conflict between migrants, the state, and indigenous groups due to differences in culture—particularly in administration, and cultural topics such as nudity, food and sex. The transmigration program in Papua was stopped in 2015 due to the controversies it had caused.",
"title": "Demographics"
},
{
"paragraph_id": 80,
"text": "Papua, the easternmost region of the Indonesian archipelago, exhibits a very complex linguistic network. The diversity of languages and the situation of multilingualism is very real. There are many language families scattered in this wide area, namely the Austronesian language family and numerous non-Austronesian languages known collectively as Papuan languages. Speakers of different Austronesian languages are found in coastal communities, such as Biak, Wandamen, Waropen and Ma'ya. On the other hand, Papuan languages are spoken in the interior and Central Highlands, starting from the Bird's Head Peninsula in the west to the eastern tip of the island of New Guinea, for example Meybrat, Dani, Ekari, Asmat, Muyu and Sentani language.",
"title": "Demographics"
},
{
"paragraph_id": 81,
"text": "At this time, research efforts to find out how many indigenous languages in Papua are still being pursued. Important efforts regarding documentation and inventory of languages in Papua have also been carried out by two main agencies, namely SIL International and the Language and Book Development Agency in Jakarta. The results of the research that have been published by the two institutions show that there are differences in the number of regional languages in Papua. The Language and Book Development Agency as the official Indonesian government agency has announced or published that there are 207 different regional languages in Papua, while SIL International has stated that there are 271 regional languages in the region. Some of the regional languages of Papua are spoken by a large number of speakers and a wide spread area, some are supported by a small number of speakers and are scattered in a limited environment. However, until now it is estimated that there are still a number of regional languages in Papua that have not been properly studied so that it is not known what the form of the language is. In addition to local languages that have been listed by the two main institutions above, there are also dozens more languages from other islands due to population migration that is not included in the list of local languages in Papua, for example languages from Sulawesi (Bugis, Makassar, Toraja, Minahasa), Javanese from Java, and local languages from Maluku. So-called Papuan languages comprise hundreds of different languages, most of which are not related.",
"title": "Demographics"
},
{
"paragraph_id": 82,
"text": "As in other provinces, Indonesian is the official language of the state, as well as the province. Indonesian is used in inter-ethnic communication, usually between native Papuans and non-Papuan migrants who came from other parts of Indonesia. Most formal education, and nearly all national mass media, governance, administration, judiciary, and other forms of communication in Papua, are conducted in Indonesian. A Malay-based creole language called Papuan Malay is used as the lingua franca in the province. It emerged as a contact language among tribes in Indonesian New Guinea for trading and daily communication. Nowadays, it has a growing number of native speakers. More recently, the vernacular of Indonesian Papuans has been influenced by Standard Indonesian, the national standard dialect. Some linguists have suggested that Papuan Malay has its roots in North Moluccan Malay, as evidenced by the number of Ternate loanwords in its lexicon. Others have proposed that it is derived from Ambonese Malay. A large number of local languages are spoken in the province, and the need for a common lingua franca has been underlined by the centuries-old traditions of inter-group interaction in the form of slave-hunting, adoption, and intermarriage. It is likely that Malay was first introduced by the Biak people, who had contacts with the Sultanate of Tidore, and later, in the 19th century, by traders from China and South Sulawesi. However, Malay was probably not widespread until the adoption of the language by the Dutch missionaries who arrived in the early 20th century and were then followed in this practice by the Dutch administrators. The spread of Malay into the more distant areas was further facilitated by the Opleiding tot Dorpsonderwizer ('Education for village teacher') program during the Dutch colonial era. There are four varieties of Papuan Malay that can be identified, including Serui Malay. A variety of Papuan Malay is spoken in Vanimo, Papua New Guinea near the Indonesian border.",
"title": "Demographics"
},
{
"paragraph_id": 83,
"text": "Religion in Papua (2022)",
"title": "Demographics"
},
{
"paragraph_id": 84,
"text": "According to Indonesian Citizenship and Civil Registry in 2022, 70.15% of the Papuans identified themselves as Christians, with 64.68% being Protestants and 5.47% being Catholics. 29.56% of the population are Muslims and less than 1% were Buddhists or Hindus. There is also substantial practice of animism, the traditional religion for many Papuans, with many blending animistic beliefs with other religions such as Christianity and Islam. Christianity, including Protestantism and Roman Catholic are mostly adhered by native Papuans and migrants from Maluku, East Nusa Tenggara, North Sulawesi and Bataks of North Sumatra. Islam are mostly adhered by migrants from North Maluku, South Sulawesi (except Torajans), western Indonesia, and some native Papuans. Lastly Hinduism and Buddhism are mostly adhered by Balinese migrants and Chinese-Indonesians respectively.",
"title": "Demographics"
},
{
"paragraph_id": 85,
"text": "Islam had been present in Papua since the 15th century, because of interaction with Muslim traders and Moluccan Muslim Sultanates especially the earliest being Bacan. Though there were many earlier theories and folk legends on origin of Islam, sometimes mixed with indigenous folk religion of Fakfak, Kaimana, Bintuni, and Wondama. These include Islamic procession of Hajj pilgrimage that do not go to Meccah, but to Nabi Mountain, near Arguni Bay and Wondama Bay. According to Aceh origins, a Samudra Pasai figure called Tuan Syekh Iskandar Syah was sent to Mesia (Kokas) to preach in Nuu War (Papua), he converted a Papuan called Kriskris by teaching him about Alif Lam Ha (Allah) and Mim Ha Mim Dal (Muhammad), he became Imam and first king of Patipi, Fakfak. Syekh Iskandar brought with him some religious texts, which were copied onto Koba-Koba leaves and wood barks. Syekh Iskandar would return to Aceh bringing the original manuscripts, but before that he would visit Moluccas specifically in Sinisore village. This corresponds with the village's origin of Islam that instead came from Papua. A study by Fakfak government, mentioned another Acehnese figure called Abdul Ghafar who visited Old Fatagar in 1502 under the reign of Rumbati King Mansmamor. He would preach in Onin language (lingua franca of the area at the time) and was buried next to village mosque in Rumbati, Patipi Bay, Fakfak. Based on family account of Abdullah Arfan, the dynasty of Salawati Kingdom, in the 16th century the first Papuan Muslim was Kalewan who married Siti Hawa Farouk, a muballighah from Cirebon, and changed his name to Bayajid who became the ancestor of Arfan clan. Meanwhile, based on oral history of Fakfak and Kaimana, a Sufi by the name of Syarif Muaz al-Qathan from Yaman constructed a mosque in Tunasgain, which was dated using the 8 merbau woods previously used as ceremonial Alif poles for the mosque around every 50 years, to be from 1587. He was also attributed of converting Samay, an Adi Ruler of the royal line of Sran. Islam only grew in the coastal part of Papua especially in the bird head areas, and did not spread to the interior part of the island until Dutch started sending migrants in 1902 and exiled Indonesian leaders in 1910 to Merauke. Muhammadiyah figures were exiled in Papua and in their exile help spread Islam in the region. Later on to help members with education issues, Muhammadiyah only formally sent its teacher in 1933. Islam in the interior highland only spread after 1962, after interaction with teachers and migrants as was the case of Jayawijaya and the case of Dani tribe of Megapura. While in Wamena, conversion of Walesi village in 1977 was attributed to Jamaludin Iribaram, a Papuan teacher from Fakfak. Other smaller indigenous Islamic communities can also be found in Asmat, Yapen, Waropen, Biak, Jayapura, and Manowari.",
"title": "Demographics"
},
{
"paragraph_id": 86,
"text": "Missionaries Carl Ottow and Johann Geisler, under the initiative of Ottho Gerhard Heldring and permission from Tidore Sultanate, are the first Christian missionaries that reached Papua. They entered Papua at Mansinam Island, near Manokwari on 5 February 1855. Since 2001, the fifth of February has been a Papuan public holiday, recognizing this first landing. In 1863, sponsored by the Dutch colonial government, the Utrecht Mission Society (UZV) started a Christian-based education system as well as regular church services in Western New Guinea. Initially the Papuans' attendance was encouraged using bribes of betel nut and tobacco, but subsequently this was stopped. In addition, slaves were bought to be raised as step children and then freed. By 1880, only 20 Papuans had been baptized, including many freed slaves. The Dutch government established posts in Netherlands New Guinea in 1898, a move welcomed by the missionaries, who saw orderly Dutch rule as the essential antidote to Papua paganism. Subsequently, the UZV mission had more success, with a mass conversion near Cenderawasih Bay in 1907 and the evangelization of the Sentani people by Pamai, a native Papuan in the late 1920s. Due to the Great Depression, the mission suffered a funding shortfall, and switched to native evangelists, who had the advantage of speaking the local language (rather than Malay), but were often poorly trained. The mission extended in the 1930s to Yos Sudarso Bay, and the UZV mission by 1934 had over 50,000 Christians, 90% of them in North Papua, the remainder in West Papua. By 1942 the mission had expanded to 300 schools in 300 congregations. The first Catholic presence in Papua was in Fakfak, a Jesuit mission in 1894. In 1902 the Vicariate of Netherlands New Guinea was established. Despite the earlier activity in Fakfak, the Dutch restricted the Catholic Church to the southern part of the island, where they were active especially around Merauke. The mission campaigned against promiscuity and the destructive practices of headhunting among the Marind-anim. Following the 1918 flu pandemic, which killed one in five in the area, the Dutch government agreed to the establishment of model villages, based on European conditions, including wearing European clothes, but which the people would submit to only by violence. In 1925 the Catholics sought to re-establish their mission in Fakfak; permission was granted in 1927. This brought the Catholics into conflict with the Protestants in North Papua, who suggested expanding to South Papua in retaliation.",
"title": "Demographics"
},
{
"paragraph_id": 87,
"text": "The native Papuan people has a distinct culture and traditions that cannot be found in other parts of Indonesia. Coastal Papuans are usually more willing to accept modern influence into their daily lives, which in turn diminishes their original culture and traditions. Meanwhile, most inland Papuans still preserves their original culture and traditions, although their way of life over the past century are tied to the encroachment of modernity and globalization. Each Papuan tribe usually practices their own tradition and culture, which may differ greatly from one tribe to another.",
"title": "Culture"
},
{
"paragraph_id": 88,
"text": "The Ararem tradition is the tradition of delivering the dowry of a future husband to the family of the prospective wife in the Biak custom. In the Biak language, the word \"Ararem\" means dowry. In this procession, the bride and groom will be escorted on foot in a procession, accompanied by songs and dances accompanied by music and. The amount of the dowry is determined by the woman's family as agreed by her relatives. The date of submission of the dowry must be agreed upon by the family of the woman or the family of the prospective wife and the family of the man or family of the prospective husband. In the tradition of the Biak people, the payment of the dowry is a tradition that must be obeyed because it involves the consequences of a marriage.",
"title": "Culture"
},
{
"paragraph_id": 89,
"text": "There are a lot of traditional dances that are native to the province of Papua. Each Papuan tribe would usually have their own unique traditional dances.",
"title": "Culture"
},
{
"paragraph_id": 90,
"text": "The Yospan dance (Indonesian: Tarian Yospan) is a type of social association dance in Papua which is a traditional dance originating from the coastal regions of Papua, namely Biak, Yapen and Waropen, which are often played by the younger people as a form of friendship. Initially, the Yospan dance originated from two dances called Yosim and Pancar, which were eventually combined into one. Hence, Yospan is an acronym of Yosim and Pancar. When performing the Yosim dance, which originated from Yapen and Waropen, the dancers invited other residents to be immersed in the songs sung by a group of singers and music instrument holders. The musical instruments used are simple, which consists of ukulele and guitar, musical instruments that are not native to Papua. There is also a tool that functions as a bass with three ropes. The rope is usually made from rolled fibers, a type of pandanus leaf, which can be found in the forests of the coastal areas of Papua. A music instrument called Kalabasa is also played during the dance, it is made of dried Calabash, then filled with beads or small stones that are played by simply shaking it. The women dancers wear woven sarongs to cover their chests, decorative heads with flowers and bird feathers. Meanwhile, the male dancers would usually wear shorts, open chest, head also decorated with bird feathers. The Pancar dance that originated from Biak is only accompanied by a tifa, which is the traditional musical instrument of the coastal tribes in Papua.",
"title": "Culture"
},
{
"paragraph_id": 91,
"text": "The Isosolo dance is a type of dance performed by the inhabitants who lives around Lake Sentani in Jayapura. The Isosolo dance is performed to symbolize the harmony between different tribes in Papua. The art of boat dancing is a tradition of the Papuan people, especially among the Sentani people, where the dance is performed from one village to another. According to the Sentani language, Isosolo or Isolo dance is a traditional art of the Sentani people who dance on a boat on Lake Sentani. The word Isosolo consists of two words, iso and solo (or holo). Iso means to rejoice and dance to express feelings of the heart, while holo means a group or herd from all age groups who dance. Hence, isosolo means a group of people who dance with joy to express their feelings. The Isosolo dance in Sentani is usually performed by ondofolo (traditional leaders) and the village community to present a gift to other ondofolo. Items that are offered are items that are considered valuable, such as large wild boar, garden products, delivering ondofolo girls to be married, and several other traditional gifts. However, at this time, apart from being a form of respect for ondoafi, isosolo is considered more as a performance of the Sentani people's pride which is one of the popular attractions at the Lake Sentani Festival, which is held annually.",
"title": "Culture"
},
{
"paragraph_id": 92,
"text": "Each Papuan tribe usually has their own war dance. The Papuan war dance is one of the oldest dances of the Papuan people because this classical dance has been around for thousands of years and is even one of the legacies of Indonesia's prehistoric times. In Papuan culture, this dance is a symbol of how strong and brave the Papuan people are. Allegedly, this dance was once a part of traditional ceremonies when fighting other tribes.",
"title": "Culture"
},
{
"paragraph_id": 93,
"text": "Another traditional dance that is common to most if not all Papuan tribes is called musyoh. The emergence of the musyoh dance is based on a certain history. In ancient times, when a Papuan tribe member died due to an accident or something unexpected, the Papuan people believed that the spirit of the person who died was still roaming and unsettled. To overcome this, the Papuan tribesmen created a ritual in the form of the musyoh dance. Thus, this traditional dance is often referred to as a spirit exorcism dance. Generally, the musyoh dance is performed by men. However, besides the purpose of exorcising spirits, the musyoh dance is also used by the Papuan people for another purpose, such as welcoming guests. The musyoh dance is a symbol of respect, gratitude, and an expression of happiness in welcoming guests. If it is for the purpose of expelling the spirit, this musyoh dance is performed by men. In the case for welcoming guests, this dance is performed by men and women. The costumes worn by the dancers can be said to be very simple costumes. This simplicity can be seen from its very natural ingredients, namely processed tree bark and plant roots. The material is then used as a head covering, tops and bottoms, bracelets and necklaces. There are also unique scribbles on the dancers' bodies that show the uniqueness of the dance.",
"title": "Culture"
},
{
"paragraph_id": 94,
"text": "The kariwari is one of the traditional Papuan houses, more precisely the traditional house of the Tobati-Enggros people who live around Yotefa Bay and Lake Sentani near Jayapura. Unlike other forms of Papuan traditional houses, such as the round honai, the kariwari is usually constructed in the shape of an octagonal pyramid. Kariwari are usually made of, bamboo, iron wood and forest sago leaves. The Kariwari house consists of two floors and three rooms or three rooms, each with different functions. The kariwari is not like a honai that can be lived in by anyone, it cannot even be the residence of a tribal chief – unlike the honai which has political and legal functions. The kariwari is more specific as a place of education and worship, therefore the position of the Kariwari in the community of the Tobati-Enggros people is considered a sacred and holy place. Like traditional houses in general, the kariwari also has a design that is full of decorative details that make it unique, of course, the decorations are related to Papuan culture. especially from the Tobati-Enggros. The decorations found in the kariwari are usually in the form of works of art, among others; paintings, carvings and also sculptures. Apart from being decorated with works of art, the kariwari is also decorated with various weapons, such as; bow and arrow. There are also some skeletons of prey animals, usually in the form of wild boar fangs, kangaroo skeletons, turtle or turtle shells, birds-of-paradise, and so on.",
"title": "Culture"
},
{
"paragraph_id": 95,
"text": "Rumsram is the traditional house of the Biak Numfor people on the northern coast of Papua. This house was originally intended for men, while women were prohibited from entering or approaching it. Its function is similar to the kariwari, namely as a place for activities in teaching and educating men who are starting to be teenagers, in seeking life experiences. The building is square with a roof in the shape of an upside down boat because of the background of the Biak Numfor tribe who work as sailors. The materials used are bark for floors, split and chopped water bamboo for walls, while the roof is made of dried sago leaves. The walls are made of sago leaves. The original rumsram wall only had a few windows and its position was at the front and back. A rumsram usually has a height of approximately 6–8 m and is divided into two parts, differentiated by floor levels. The first floor is open and without walls. Only the building columns were visible. In this place, men are educated to learn sculpting, shielding, boat building, and war techniques. In a traditional ceremony called Wor Kapanaknik, which in the Biak language means \"to shave a child's hair\", a traditional ritual is usually carried out when boys are 6–8 years old. The age when a child is considered to be able to think and the child has started to get education in the search for life experiences, as well as how to become a strong and responsible man as the head of the family later. The children would then enter a rumsram, hence the rite of passage is also called rumsram, because the ritual are carried out in the rumsram.",
"title": "Culture"
},
{
"paragraph_id": 96,
"text": "The cuscus bone skewer is a traditional Papuan weapon used by one of the indigenous Papuan tribes, namely the Bauzi people. The Bauzi people still maintains their tradition of hunting and gathering. The weapon they use to hunt animals while waiting for the harvest to arrive is a piercing tool made of cuscus bones. The use of cuscus bones as a traditional weapon is very environmentally friendly. This happens because in its manufacture, it does not require the help of industrial equipment that pollutes the environment. This traditional weapon is made from cleaned cuscus bone (before the meat is eaten and separated from the bone), sharpened by rubbing it with a whetstone, and repeated so that the desired sharpness is formed.",
"title": "Culture"
},
{
"paragraph_id": 97,
"text": "Papuan knife blades are usually used for slashing or cutting when hunting animals in the forest. Even though the animals they face are large mammals and crocodiles, the Papuan people still adhere to prevailing customs. The custom is that it is not permissible to use any kind of firearm when hunting. Papuan Daggers are knives made of unique materials and are difficult to obtain in other areas, namely the bones of an endemic animal to Papua, the cassowary. Cassowary bones are used by local culture to become a tool that has beneficial values for life. Apart from that, the feathers attached to the blade's handle are also the feathers of the cassowary.",
"title": "Culture"
},
{
"paragraph_id": 98,
"text": "The Papuan spear is referred to by the local community of Sentani as Mensa. The spear was a weapon that could be used for both fighting and hunting. In addition, Papuan culture often uses the spear as a property in dances. The weapons mentioned above are made from basic materials that are easily found in nature. Wood to make the handle, and a river stone that was sharpened as a spearhead. For that reason, the spear is able to survive as a weapon that must be present in hunting and fighting activities. What makes this traditional Papuan weapon feel special is that there is a rule not to use a spear other than for hunting and fighting purposes. For example, it is forbidden to cut young tree shoots with a spear, or to use a spear to carry garden produce. If this rule was broken, the person who wielded this spear would have bad luck. Meanwhile, in the manufacturing process, this spear frame takes a long time. Starting from the wood taken from the tree kayu swang with the diameter of 25 cm. After drying it in the sun, the wood is split to four and shaped so it has rounded cross-section, then the tip is shaped until it formed two-sided and leaf shaped spear-tip.",
"title": "Culture"
},
{
"paragraph_id": 99,
"text": "The bow and arrow is a traditional Papuan weapon locally in Sentani called Fela that has uses for hunting wild boar and other animals. The arrowheads is made from bark of sago tree, the bow is made from a type of wild betel nut tree which can also be made the arrowheads, the shaft is made from a type of grass, small sized bamboo which do not have cavity and rattan as the bowstring. Depending on the phase of for battle there are variety of arrow type, Hiruan is a plain sharp arrow with no decoration to lure the enemy; Humbai is a sharp arrow which have one serrated sided tip and the other plain, used to shoot seen enemy that is getting closer; Hube is an arrow with both sides serrated, used for enemy that is getting closer still; Humame is an arrow with three sided serrated tip, used for a really close enemy; Hukeli is an arrow with four-sided serrated arrowhead, used only after Humame depleted; Pulung Waliman is an arrow with two-sided arrowhead, with three large teeth, and hole in the middle, only used to kill enemy chieftain. In addition, for hunting three kinds of arrows are used, Hiruan which have similar characteristic as war Hiruan other than different shape; Maigue is an arrow with two-pronged tip; and Ka'ai is an arrow with three-pronged tip.",
"title": "Culture"
},
{
"paragraph_id": 100,
"text": "The Papuan parang called Yali made from old swang wood, take 2–3 days to make and can be made before or after drying the wood. It can be used for household purposes, namely cooking, cutting meat, cutting vegetables and cutting down sago. In addition, Papuan machetes are also used in the agricultural industry and be used as a collection. Usually it will have carving symbolizing prosperity for humans or prosperity for animals.",
"title": "Culture"
},
{
"paragraph_id": 101,
"text": "Papuan oars are traditional Papuan tools called Roreng for males and Biareng for females. They are made from swang wood and the bark of sago trees. The wood was split to create flat surface and then shaped like an oar, with the tip made thinner and sharper. It primarily functioned as an oar to propel canoes forward, but under attack from enemies from the seas it can be used as spear because of its sharp tip. Usually oars have ornamental engravings shaped like a finger called Hiokagema to symbolize unity of strength of ten fingers to power the oars.",
"title": "Culture"
},
{
"paragraph_id": 102,
"text": "Papuan Stone Axes from Sentani are called Mamehe usually made from river stones secured to the handle with rattan. Usually it was made from batu pualan (marble) which was then shaped with another stone by chipping slowly. According local tradition the making of the stone have to be done secretly from the family, and can take up to 2 months. For the handle it was constructed using swang wood or ironwood. One part was to secure the axe head and another for the handle, with all parts tied together using rattan. the axe are usually made for cutting down trees and canoes building, however currently used more often as collections.",
"title": "Culture"
},
{
"paragraph_id": 103,
"text": "Tifa is a traditional Papuan musical instrument that is played by beating. Unlike those from Maluku, this musical instrument from Papua is usually longer and has a handle on one part of the instrument. Meanwhile, the tifa from Maluku has a wide size and there is no handle on the side. The material used also comes from the strongest wood, usually the type of Lenggua wood (Pterocarpus indicus) with animal skin as the upper membrane. The animal's skin is tied with rattan in a circle so that it is tight and can produce a beautiful sound. In addition, on the body part of the musical instrument there is a typical Papuan carving. Tifa is usually used to accompany guest welcoming events, traditional parties, dances, etc. The size of the sound that comes out of the drum depends on the size of the instrument. Apart from being a means of accompanying the dance, the tifa also has a social meaning based on the function and shape of the carved ornaments on the body of the tifa. In the culture of the Marind-Anim people in Merauke, each clan has its own shape and motif as well as a name for each tifa. The same goes for the Biak and Waropen people.",
"title": "Culture"
},
{
"paragraph_id": 104,
"text": "The triton is a traditional Papuan musical instrument that is played by blowing it. This musical instrument is found throughout the coast, especially in the Biak, Yapen, Waropen and Nabire. Initially, this tool was only used as a means of communication or as a means of calling and signaling. Currently this instrument is also used as a means of entertainment and traditional musical instruments.",
"title": "Culture"
},
{
"paragraph_id": 105,
"text": "The native Papuan food usually consists of roasted boar with Tubers such as sweet potato. The staple food of Papua and eastern Indonesia in general is sago, as the counterpart of central and western Indonesian cuisines that favour rice as their staple food. Sago is either processed as a pancake or sago congee called papeda, usually eaten with yellow soup made from tuna, red snapper or other fishes spiced with turmeric, lime, and other spices. On some coasts and lowlands on Papua, sago is the main ingredient to all the foods. Sagu bakar, sagu lempeng, and sagu bola, has become dishes that is well known to all Papua, especially on the custom folk culinary tradition on Mappi, Asmat and Mimika. Papeda is one of the sago foods that is rarely found. As Papua is considered as a non-Muslim majority regions, pork is readily available everywhere. In Papua, pig roast which consists of pork and yams are roasted in heated stones placed in a hole dug in the ground and covered with leaves; this cooking method is called bakar batu (burning the stone), and it is an important cultural and social event among Papuan people.",
"title": "Culture"
},
{
"paragraph_id": 106,
"text": "In the coastal regions, seafood is the main food for the local people. One of the famous sea foods from Papua is fish wrap (Indonesian: Ikan Bungkus). Wrapped fish in other areas is called Pepes ikan. Wrapped fish from Papua is known to be very fragrant. This is because there are additional bay leaves so that the mixture of spices is more fragrant and soaks into the fish meat. The basic ingredient of Papuan wrapped fish is sea fish, the most commonly used fish is milkfish. Milkfish is suitable for \"wrap\" because it has meat that does not crumble after processing. The spices are sliced or cut into pieces, namely, red and bird's eye chilies, bay leaves, tomatoes, galangal, and lemongrass stalks. While other spices are turmeric, garlic and red, red chilies, coriander, and hazelnut. The spices are first crushed and then mixed or smeared on the fish. The wrapping is in banana leaves.",
"title": "Culture"
},
{
"paragraph_id": 107,
"text": "Common Papuan snacks are usually made out of sago. Kue bagea (also called sago cake) is a cake originating from Ternate in North Maluku, although it can also be found in Papua. It has a round shape and creamy color. Bagea has a hard consistency that can be softened in tea or water, to make it easier to chew. It is prepared using sago, a plant-based starch derived from the sago palm or sago cycad. Sagu Lempeng is a typical Papuan snacks that is made in the form of processed sago in the form of plates. Sagu Lempeng are also a favorite for travelers. But it is very difficult to find in places to eat because this bread is a family consumption and is usually eaten immediately after cooking. Making sago plates is as easy as making other breads. Sago is processed by baking it by printing rectangles or rectangles with iron which is ripe like white bread. Initially tasteless, but recently it has begun to vary with sugar to get a sweet taste. It has a tough texture and can be enjoyed by mixing it or dipping it in water to make it softer. Sago porridge is a type of porridge that are found in Papua. This porridge is usually eaten with yellow soup made of mackerel or tuna then seasoned with turmeric and lime. Sago porridge is sometimes also consumed with boiled tubers, such as those from cassava or sweet potato. Vegetable papaya flowers and sautéed kale are often served as side dishes to accompany the sago porridge. In the inland regions, Sago worms are usually served as a type of snack dish. Sago worms come from sago trunks which are cut and left to rot. The rotting stems cause the worms to come out. The shape of the sago worms varies, ranging from the smallest to the largest size of an adult's thumb. These sago caterpillars are usually eaten alive or cooked beforehand, such as stir-frying, cooking, frying and then skewered. But over time, the people of Papua used to process these sago caterpillars into sago caterpillar satay. To make satay from this sago caterpillar, the method is no different from making satay in general, namely on skewers with a skewer and grilled over hot coals.",
"title": "Culture"
}
]
| Papua is a province of Indonesia, comprising the northern coast of Western New Guinea together with island groups in Cenderawasih Bay to the west. It roughly follows the borders of the Papuan customary region of Tabi Saireri. It is bordered by the sovereign state of Papua New Guinea to the east, the Pacific Ocean to the north, Cenderawasih Bay to the west, and the provinces of Central Papua and Highland Papua to the south. The province also shares maritime boundaries with Palau in the Pacific. Following the splitting off of twenty regencies to create the three new provinces of Central Papua, Highland Papua, and South Papua on 30 June 2022, the residual province is divided into eight regencies (kabupaten) and one city (kota), the latter being the provincial capital of Jayapura. The province has large potential in natural resources, such as gold, nickel and petroleum. Papua, along with five other Papuan provinces, has a higher degree of autonomy compared to other Indonesian provinces. The island of New Guinea has been populated for tens of thousands of years. European traders began frequenting the region around the late 16th century due to the spice trade. In the end, the Dutch Empire emerged as the dominant leader in the spice war, annexing the western part of New Guinea into the colony of the Dutch East Indies. The Dutch remained in New Guinea until 1962, even though other parts of the former colony had declared independence as the Republic of Indonesia in 1945. Following negotiations and conflicts with the Indonesian government, the Dutch transferred Western New Guinea to a United Nations Temporary Executive Authority (UNTEA), which was in turn transferred to Indonesia after the controversial Act of Free Choice. The province was formerly called Irian Jaya and comprised the whole of Western New Guinea until the inauguration of the province of West Papua in 2001. In 2002, Papua adopted its current name and was granted a special autonomous status under Indonesian legislation. The province of Papua remains one of the least developed provinces in Indonesia. As of 2020, Papua has a GDP per capita of Rp 56.1 million, ranking 11th among all Indonesian provinces. However, Papua has a Human Development Index of only 0.604, the lowest among all Indonesian provinces. The harsh New Guinean terrain and climate are among the main reasons why infrastructure in Papua is considered the most challenging to develop of any Indonesian region. The 2020 census revealed a population of 4,303,707, of which the majority were Christian. The official estimate for mid 2022 was 4,418,581 prior to the division of the province into four separate provinces. The official estimate of the population of the reduced province in mid 2022 was 1,034,956. The interior is predominantly populated by ethnic Papuans while coastal towns are inhabited by descendants of intermarriages between Papuans, Melanesians and Austronesians, including other Indonesian ethnic groups. Migrants from the rest of Indonesia also tend to inhabit the coastal regions. The province is also home to some uncontacted peoples. | 2001-11-13T20:04:24Z | 2023-12-27T05:59:37Z | [
"Template:Infobox settlement",
"Template:Main articles",
"Template:Lang",
"Template:Cite thesis",
"Template:Cite press release",
"Template:Pie chart",
"Template:Official website",
"Template:Use dmy dates",
"Template:Update",
"Template:Sfn",
"Template:Flagicon image",
"Template:Reflist",
"Template:Cite news",
"Template:Citation",
"Template:Cite conference",
"Template:Main",
"Template:Webarchive",
"Template:Commons category",
"Template:Portal bar",
"Template:Authority control",
"Template:Cite journal",
"Template:Navboxes",
"Template:Short description",
"Template:Fontcolor",
"Template:Cite web",
"Template:Harvp",
"Template:Papua",
"Template:Provinces of Indonesia",
"Template:Quote box",
"Template:Cite book",
"Template:Citation needed",
"Template:Lit",
"Template:ISBN"
]
| https://en.wikipedia.org/wiki/Papua_(province) |
15,200 | IMF (disambiguation) | The IMF, or International Monetary Fund, is an international organization.
IMF may also refer to: | [
{
"paragraph_id": 0,
"text": "The IMF, or International Monetary Fund is an international organization.",
"title": ""
},
{
"paragraph_id": 1,
"text": "IMF may also refer to:",
"title": ""
}
]
| The IMF, or International Monetary Fund, is an international organization. IMF may also refer to: | 2022-10-19T14:57:04Z | [
"Template:Wiktionary",
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/IMF_(disambiguation) |
|
15,201 | Interdisciplinarity | Interdisciplinarity or interdisciplinary studies involves the combination of multiple academic disciplines into one activity (e.g., a research project). It draws knowledge from several other fields like sociology, anthropology, psychology, economics, etc. It is about creating something by thinking across boundaries. It is related to an interdiscipline or an interdisciplinary field, which is an organizational unit that crosses traditional boundaries between academic disciplines or schools of thought, as new needs and professions emerge. Large engineering teams are usually interdisciplinary, as a power station or mobile phone or other project requires the melding of several specialties. However, the term "interdisciplinary" is sometimes confined to academic settings.
The term interdisciplinary is applied within education and training pedagogies to describe studies that use methods and insights of several established disciplines or traditional fields of study. Interdisciplinarity involves researchers, students, and teachers in the goals of connecting and integrating several academic schools of thought, professions, or technologies—along with their specific perspectives—in the pursuit of a common task. The epidemiology of HIV/AIDS or global warming requires understanding of diverse disciplines to solve complex problems. An interdisciplinary approach may be applied where the subject is felt to have been neglected or even misrepresented in the traditional disciplinary structure of research institutions, for example, women's studies or ethnic area studies. Interdisciplinarity can likewise be applied to complex subjects that can only be understood by combining the perspectives of two or more fields.
The adjective interdisciplinary is most often used in educational circles when researchers from two or more disciplines pool their approaches and modify them so that they are better suited to the problem at hand, including the case of the team-taught course where students are required to understand a given subject in terms of multiple traditional disciplines. For example, the subject of land use may appear differently when examined by different disciplines, for instance, biology, chemistry, economics, geography, and politics.
Although "interdisciplinary" and "interdisciplinarity" are frequently viewed as twentieth century terms, the concept has historical antecedents, most notably Greek philosophy. Julie Thompson Klein attests that "the roots of the concepts lie in a number of ideas that resonate through modern discourse—the ideas of a unified science, general knowledge, synthesis and the integration of knowledge", while Giles Gunn says that Greek historians and dramatists took elements from other realms of knowledge (such as medicine or philosophy) to further understand their own material. The building of Roman roads required men who understood surveying, material science, logistics and several other disciplines. Any broadminded humanist project involves interdisciplinarity, and history shows a crowd of cases, as seventeenth-century Leibniz's task to create a system of universal justice, which required linguistics, economics, management, ethics, law philosophy, politics, and even sinology.
Interdisciplinary programs sometimes arise from a shared conviction that the traditional disciplines are unable or unwilling to address an important problem. For example, social science disciplines such as anthropology and sociology paid little attention to the social analysis of technology throughout most of the twentieth century. As a result, many social scientists with interests in technology have joined science, technology and society programs, which are typically staffed by scholars drawn from numerous disciplines. They may also arise from new research developments, such as nanotechnology, which cannot be addressed without combining the approaches of two or more disciplines. Examples include quantum information processing, an amalgamation of quantum physics and computer science, and bioinformatics, combining molecular biology with computer science. Sustainable development as a research area deals with problems requiring analysis and synthesis across economic, social and environmental spheres; often an integration of multiple social and natural science disciplines. Interdisciplinary research is also key to the study of health sciences, for example in studying optimal solutions to diseases. Some institutions of higher education offer accredited degree programs in Interdisciplinary Studies.
At another level, interdisciplinarity is seen as a remedy to the harmful effects of excessive specialization and isolation in information silos. On some views, however, interdisciplinarity is entirely indebted to those who specialize in one field of study—that is, without specialists, interdisciplinarians would have no information and no leading experts to consult. Others place the focus of interdisciplinarity on the need to transcend disciplines, viewing excessive specialization as problematic both epistemologically and politically. When interdisciplinary collaboration or research results in new solutions to problems, much information is given back to the various disciplines involved. Therefore, both disciplinarians and interdisciplinarians may be seen in complementary relation to one another.
Because most participants in interdisciplinary ventures were trained in traditional disciplines, they must learn to appreciate differences of perspectives and methods. For example, a discipline that places more emphasis on quantitative rigor may produce practitioners who are more scientific in their training than others; in turn, colleagues in "softer" disciplines may associate quantitative approaches with difficulty in grasping the broader dimensions of a problem and with lower rigor in theoretical and qualitative argumentation. An interdisciplinary program may not succeed if its members remain stuck in their disciplines (and in disciplinary attitudes). Those who lack experience in interdisciplinary collaborations may also not fully appreciate the intellectual contribution of colleagues from those disciplines. From the disciplinary perspective, however, much interdisciplinary work may be seen as "soft", lacking in rigor, or ideologically motivated; these beliefs place barriers in the career paths of those who choose interdisciplinary work. For example, interdisciplinary grant applications are often refereed by peer reviewers drawn from established disciplines; interdisciplinary researchers may experience difficulty getting funding for their research. In addition, untenured researchers know that, when they seek promotion and tenure, it is likely that some of the evaluators will lack commitment to interdisciplinarity. They may fear that making a commitment to interdisciplinary research will increase the risk of being denied tenure.
Interdisciplinary programs may also fail if they are not given sufficient autonomy. For example, interdisciplinary faculty are usually recruited to a joint appointment, with responsibilities in both an interdisciplinary program (such as women's studies) and a traditional discipline (such as history). If the traditional discipline makes the tenure decisions, new interdisciplinary faculty will be hesitant to commit themselves fully to interdisciplinary work. Other barriers include the generally disciplinary orientation of most scholarly journals, leading to the perception, if not the fact, that interdisciplinary research is hard to publish. In addition, since traditional budgetary practices at most universities channel resources through the disciplines, it becomes difficult to account for a given scholar or teacher's salary and time. During periods of budgetary contraction, the natural tendency to serve the primary constituency (i.e., students majoring in the traditional discipline) makes resources scarce for teaching and research comparatively far from the center of the discipline as traditionally understood. For these same reasons, the introduction of new interdisciplinary programs is often resisted because it is perceived as a competition for diminishing funds.
Due to these and other barriers, interdisciplinary research areas are strongly motivated to become disciplines themselves. If they succeed, they can establish their own research funding programs and make their own tenure and promotion decisions. In so doing, they lower the risk of entry. Examples of former interdisciplinary research areas that have become disciplines, many of them named for their parent disciplines, include neuroscience, cybernetics, biochemistry and biomedical engineering. These new fields are occasionally referred to as "interdisciplines". On the other hand, even though interdisciplinary activities are now a focus of attention for institutions promoting learning and teaching, as well as organizational and social entities concerned with education, they are practically facing complex barriers, serious challenges and criticism. The most important obstacles and challenges faced by interdisciplinary activities in the past two decades can be divided into "professional", "organizational", and "cultural" obstacles.
An initial distinction should be made between interdisciplinary studies, which can be found spread across the academy today, and the study of interdisciplinarity, which involves a much smaller group of researchers. The former is instantiated in thousands of research centers across the US and the world. The latter has one US organization, the Association for Interdisciplinary Studies (founded in 1979), and two international organizations, the International Network of Inter- and Transdisciplinarity (founded in 2010) and the Philosophy of/as Interdisciplinarity Network (founded in 2009). The US research institute devoted to the theory and practice of interdisciplinarity, the Center for the Study of Interdisciplinarity at the University of North Texas, was founded in 2008 but has been closed since 1 September 2014, the result of administrative decisions at the University of North Texas.
An interdisciplinary study is an academic program or process seeking to synthesize broad perspectives, knowledge, skills, interconnections, and epistemology in an educational setting. Interdisciplinary programs may be founded in order to facilitate the study of subjects which have some coherence, but which cannot be adequately understood from a single disciplinary perspective (for example, women's studies or medieval studies). More rarely, and at a more advanced level, interdisciplinarity may itself become the focus of study, in a critique of institutionalized disciplines' ways of segmenting knowledge.
In contrast, studies of interdisciplinarity raise to self-consciousness questions about how interdisciplinarity works, the nature and history of disciplinarity, and the future of knowledge in post-industrial society. Researchers at the Center for the Study of Interdisciplinarity have made the distinction between philosophy 'of' and 'as' interdisciplinarity, the former identifying a new, discrete area within philosophy that raises epistemological and metaphysical questions about the status of interdisciplinary thinking, with the latter pointing toward a philosophical practice that is sometimes called 'field philosophy'.
Perhaps the most common complaint regarding interdisciplinary programs, by supporters and detractors alike, is the lack of synthesis—that is, students are provided with multiple disciplinary perspectives but are not given effective guidance in resolving the conflicts and achieving a coherent view of the subject. Others have argued that the very idea of synthesis or integration of disciplines presupposes questionable politico-epistemic commitments. Critics of interdisciplinary programs feel that the ambition is simply unrealistic, given the knowledge and intellectual maturity of all but the exceptional undergraduate; some defenders concede the difficulty, but insist that cultivating interdisciplinarity as a habit of mind, even at that level, is both possible and essential to the education of informed and engaged citizens and leaders capable of analyzing, evaluating, and synthesizing information from multiple sources in order to render reasoned decisions.
While much has been written on the philosophy and promise of interdisciplinarity in academic programs and professional practice, social scientists are increasingly interrogating academic discourses on interdisciplinarity, as well as how interdisciplinarity actually works—and does not—in practice. Some have shown, for example, that some interdisciplinary enterprises that aim to serve society can produce deleterious outcomes for which no one can be held to account.
Since 1998, there has been an ascendancy in the value of interdisciplinary research and teaching and a growth in the number of bachelor's degrees awarded at U.S. universities classified as multi- or interdisciplinary studies. The number of interdisciplinary bachelor's degrees awarded annually rose from 7,000 in 1973 to 30,000 a year by 2005, according to data from the National Center for Education Statistics (NCES). In addition, educational leaders from the Boyer Commission to Carnegie's President Vartan Gregorian to Alan I. Leshner, CEO of the American Association for the Advancement of Science, have advocated for interdisciplinary rather than disciplinary approaches to problem-solving in the 21st century. This has been echoed by federal funding agencies, particularly the National Institutes of Health under the direction of Elias Zerhouni, who has advocated that grant proposals be framed more as interdisciplinary collaborative projects than single-researcher, single-discipline ones.
At the same time, many thriving, longstanding bachelor's programs in interdisciplinary studies that had existed for 30 or more years have been closed down, in spite of healthy enrollment. Examples include Arizona International (formerly part of the University of Arizona), the School of Interdisciplinary Studies at Miami University, and the Department of Interdisciplinary Studies at Wayne State University; others, such as the Department of Interdisciplinary Studies at Appalachian State University and George Mason University's New Century College, have been cut back. Stuart Henry has seen this trend as part of the hegemony of the disciplines in their attempt to recolonize the experimental knowledge production of otherwise marginalized fields of inquiry, a reaction that appears to be driven by perceptions of threat from the ascendancy of interdisciplinary studies against traditional academia.
There are many examples of a particular idea arising in different disciplines at almost the same time. One case is the shift from the approach of focusing on "specialized segments of attention" (adopting one particular perspective), to the idea of "instant sensory awareness of the whole", an attention to the "total field", a "sense of the whole pattern, of form and function as a unity", an "integral idea of structure and configuration". This has happened in painting (with cubism), physics, poetry, communication and educational theory. According to Marshall McLuhan, this paradigm shift was due to the passage from an era shaped by mechanization, which brought sequentiality, to the era shaped by the instant speed of electricity, which brought simultaneity.
An article in the Social Science Journal attempts to provide a simple, common-sense definition of interdisciplinarity, bypassing the difficulties of defining that concept and obviating the need for such related concepts as transdisciplinarity, pluridisciplinarity, and multidisciplinarity:
To begin with, a discipline can be conveniently defined as any comparatively self-contained and isolated domain of human experience which possesses its own community of experts. Interdisciplinarity is best seen as bringing together distinctive components of two or more disciplines. In academic discourse, interdisciplinarity typically applies to four realms: knowledge, research, education, and theory. Interdisciplinary knowledge involves familiarity with components of two or more disciplines. Interdisciplinary research combines components of two or more disciplines in the search or creation of new knowledge, operations, or artistic expressions. Interdisciplinary education merges components of two or more disciplines in a single program of instruction. Interdisciplinary theory takes interdisciplinary knowledge, research, or education as its main objects of study.
In turn, interdisciplinary richness of any two instances of knowledge, research, or education can be ranked by weighing four variables: number of disciplines involved, the "distance" between them, the novelty of any particular combination, and their extent of integration.
Interdisciplinary knowledge and research are important because:
"The modern mind divides, specializes, thinks in categories: the Greek instinct was the opposite, to take the widest view, to see things as an organic whole [...]. The Olympic games were designed to test the arete of the whole man, not a merely specialized skill [...]. The great event was the pentathlon, if you won this, you were a man. Needless to say, the Marathon race was never heard of until modern times: the Greeks would have regarded it as a monstrosity."
"Previously, men could be divided simply into the learned and the ignorant, those more or less the one, and those more or less the other. But your specialist cannot be brought in under either of these two categories. He is not learned, for he is formally ignorant of all that does not enter into his specialty; but neither is he ignorant, because he is 'a scientist,' and 'knows' very well his own tiny portion of the universe. We shall have to say that he is a learned ignoramus, which is a very serious matter, as it implies that he is a person who is ignorant, not in the fashion of the ignorant man, but with all the petulance of one who is learned in his own special line."
"It is the custom among those who are called 'practical' men to condemn any man capable of a wide survey as a visionary: no man is thought worthy of a voice in politics unless he ignores or does not know nine-tenths of the most important relevant facts." | [
{
"paragraph_id": 0,
"text": "Interdisciplinarity or interdisciplinary studies involves the combination of multiple academic disciplines into one activity (e.g., a research project). It draws knowledge from several other fields like sociology, anthropology, psychology, economics, etc. It is about creating something by thinking across boundaries. It is related to an interdiscipline or an interdisciplinary field, which is an organizational unit that crosses traditional boundaries between academic disciplines or schools of thought, as new needs and professions emerge. Large engineering teams are usually interdisciplinary, as a power station or mobile phone or other project requires the melding of several specialties. However, the term \"interdisciplinary\" is sometimes confined to academic settings.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The term interdisciplinary is applied within education and training pedagogies to describe studies that use methods and insights of several established disciplines or traditional fields of study. Interdisciplinarity involves researchers, students, and teachers in the goals of connecting and integrating several academic schools of thought, professions, or technologies—along with their specific perspectives—in the pursuit of a common task. The epidemiology of HIV/AIDS or global warming requires understanding of diverse disciplines to solve complex problems. Interdisciplinary may be applied where the subject is felt to have been neglected or even misrepresented in the traditional disciplinary structure of research institutions, for example, women's studies or ethnic area studies. Interdisciplinarity can likewise be applied to complex subjects that can only be understood by combining the perspectives of two or more fields.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The adjective interdisciplinary is most often used in educational circles when researchers from two or more disciplines pool their approaches and modify them so that they are better suited to the problem at hand, including the case of the team-taught course where students are required to understand a given subject in terms of multiple traditional disciplines. For example, the subject of land use may appear differently when examined by different disciplines, for instance, biology, chemistry, economics, geography, and politics.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Although \"interdisciplinary\" and \"interdisciplinarity\" are frequently viewed as twentieth century terms, the concept has historical antecedents, most notably Greek philosophy. Julie Thompson Klein attests that \"the roots of the concepts lie in a number of ideas that resonate through modern discourse—the ideas of a unified science, general knowledge, synthesis and the integration of knowledge\", while Giles Gunn says that Greek historians and dramatists took elements from other realms of knowledge (such as medicine or philosophy) to further understand their own material. The building of Roman roads required men who understood surveying, material science, logistics and several other disciplines. Any broadminded humanist project involves interdisciplinarity, and history shows a crowd of cases, as seventeenth-century Leibniz's task to create a system of universal justice, which required linguistics, economics, management, ethics, law philosophy, politics, and even sinology.",
"title": "Development"
},
{
"paragraph_id": 4,
"text": "Interdisciplinary programs sometimes arise from a shared conviction that the traditional disciplines are unable or unwilling to address an important problem. For example, social science disciplines such as anthropology and sociology paid little attention to the social analysis of technology throughout most of the twentieth century. As a result, many social scientists with interests in technology have joined science, technology and society programs, which are typically staffed by scholars drawn from numerous disciplines. They may also arise from new research developments, such as nanotechnology, which cannot be addressed without combining the approaches of two or more disciplines. Examples include quantum information processing, an amalgamation of quantum physics and computer science, and bioinformatics, combining molecular biology with computer science. Sustainable development as a research area deals with problems requiring analysis and synthesis across economic, social and environmental spheres; often an integration of multiple social and natural science disciplines. Interdisciplinary research is also key to the study of health sciences, for example in studying optimal solutions to diseases. Some institutions of higher education offer accredited degree programs in Interdisciplinary Studies.",
"title": "Development"
},
{
"paragraph_id": 5,
"text": "At another level, interdisciplinarity is seen as a remedy to the harmful effects of excessive specialization and isolation in information silos. On some views, however, interdisciplinarity is entirely indebted to those who specialize in one field of study—that is, without specialists, interdisciplinarians would have no information and no leading experts to consult. Others place the focus of interdisciplinarity on the need to transcend disciplines, viewing excessive specialization as problematic both epistemologically and politically. When interdisciplinary collaboration or research results in new solutions to problems, much information is given back to the various disciplines involved. Therefore, both disciplinarians and interdisciplinarians may be seen in complementary relation to one another.",
"title": "Development"
},
{
"paragraph_id": 6,
"text": "Because most participants in interdisciplinary ventures were trained in traditional disciplines, they must learn to appreciate differences of perspectives and methods. For example, a discipline that places more emphasis on quantitative rigor may produce practitioners who are more scientific in their training than others; in turn, colleagues in \"softer\" disciplines who may associate quantitative approaches with difficulty grasp the broader dimensions of a problem and lower rigor in theoretical and qualitative argumentation. An interdisciplinary program may not succeed if its members remain stuck in their disciplines (and in disciplinary attitudes). Those who lack experience in interdisciplinary collaborations may also not fully appreciate the intellectual contribution of colleagues from those discipline. From the disciplinary perspective, however, much interdisciplinary work may be seen as \"soft\", lacking in rigor, or ideologically motivated; these beliefs place barriers in the career paths of those who choose interdisciplinary work. For example, interdisciplinary grant applications are often refereed by peer reviewers drawn from established disciplines; interdisciplinary researchers may experience difficulty getting funding for their research. In addition, untenured researchers know that, when they seek promotion and tenure, it is likely that some of the evaluators will lack commitment to interdisciplinarity. They may fear that making a commitment to interdisciplinary research will increase the risk of being denied tenure.",
"title": "Barriers"
},
{
"paragraph_id": 7,
"text": "Interdisciplinary programs may also fail if they are not given sufficient autonomy. For example, interdisciplinary faculty are usually recruited to a joint appointment, with responsibilities in both an interdisciplinary program (such as women's studies) and a traditional discipline (such as history). If the traditional discipline makes the tenure decisions, new interdisciplinary faculty will be hesitant to commit themselves fully to interdisciplinary work. Other barriers include the generally disciplinary orientation of most scholarly journals, leading to the perception, if not the fact, that interdisciplinary research is hard to publish. In addition, since traditional budgetary practices at most universities channel resources through the disciplines, it becomes difficult to account for a given scholar or teacher's salary and time. During periods of budgetary contraction, the natural tendency to serve the primary constituency (i.e., students majoring in the traditional discipline) makes resources scarce for teaching and research comparatively far from the center of the discipline as traditionally understood. For these same reasons, the introduction of new interdisciplinary programs is often resisted because it is perceived as a competition for diminishing funds.",
"title": "Barriers"
},
{
"paragraph_id": 8,
"text": "Due to these and other barriers, interdisciplinary research areas are strongly motivated to become disciplines themselves. If they succeed, they can establish their own research funding programs and make their own tenure and promotion decisions. In so doing, they lower the risk of entry. Examples of former interdisciplinary research areas that have become disciplines, many of them named for their parent disciplines, include neuroscience, cybernetics, biochemistry and biomedical engineering. These new fields are occasionally referred to as \"interdisciplines\". On the other hand, even though interdisciplinary activities are now a focus of attention for institutions promoting learning and teaching, as well as organizational and social entities concerned with education, they are practically facing complex barriers, serious challenges and criticism. The most important obstacles and challenges faced by interdisciplinary activities in the past two decades can be divided into \"professional\", \"organizational\", and \"cultural\" obstacles.",
"title": "Barriers"
},
{
"paragraph_id": 9,
"text": "An initial distinction should be made between interdisciplinary studies, which can be found spread across the academy today, and the study of interdisciplinarity, which involves a much smaller group of researchers. The former is instantiated in thousands of research centers across the US and the world. The latter has one US organization, the Association for Interdisciplinary Studies (founded in 1979), two international organizations, the International Network of Inter- and Transdisciplinarity (founded in 2010) and the Philosophy of/as Interdisciplinarity Network (founded in 2009). The US's research institute devoted to the theory and practice of interdisciplinarity, the Center for the Study of Interdisciplinarity at the University of North Texas, was founded in 2008 but is closed as of 1 September 2014, the result of administrative decisions at the University of North Texas.",
"title": "Interdisciplinary studies and studies of interdisciplinarity"
},
{
"paragraph_id": 10,
"text": "An interdisciplinary study is an academic program or process seeking to synthesize broad perspectives, knowledge, skills, interconnections, and epistemology in an educational setting. Interdisciplinary programs may be founded in order to facilitate the study of subjects which have some coherence, but which cannot be adequately understood from a single disciplinary perspective (for example, women's studies or medieval studies). More rarely, and at a more advanced level, interdisciplinarity may itself become the focus of study, in a critique of institutionalized disciplines' ways of segmenting knowledge.",
"title": "Interdisciplinary studies and studies of interdisciplinarity"
},
{
"paragraph_id": 11,
"text": "In contrast, studies of interdisciplinarity raise to self-consciousness questions about how interdisciplinarity works, the nature and history of disciplinarity, and the future of knowledge in post-industrial society. Researchers at the Center for the Study of Interdisciplinarity have made the distinction between philosophy 'of' and 'as' interdisciplinarity, the former identifying a new, discrete area within philosophy that raises epistemological and metaphysical questions about the status of interdisciplinary thinking, with the latter pointing toward a philosophical practice that is sometimes called 'field philosophy'.",
"title": "Interdisciplinary studies and studies of interdisciplinarity"
},
{
"paragraph_id": 12,
"text": "Perhaps the most common complaint regarding interdisciplinary programs, by supporters and detractors alike, is the lack of synthesis—that is, students are provided with multiple disciplinary perspectives but are not given effective guidance in resolving the conflicts and achieving a coherent view of the subject. Others have argued that the very idea of synthesis or integration of disciplines presupposes questionable politico-epistemic commitments. Critics of interdisciplinary programs feel that the ambition is simply unrealistic, given the knowledge and intellectual maturity of all but the exceptional undergraduate; some defenders concede the difficulty, but insist that cultivating interdisciplinarity as a habit of mind, even at that level, is both possible and essential to the education of informed and engaged citizens and leaders capable of analyzing, evaluating, and synthesizing information from multiple sources in order to render reasoned decisions.",
"title": "Interdisciplinary studies and studies of interdisciplinarity"
},
{
"paragraph_id": 13,
"text": "While much has been written on the philosophy and promise of interdisciplinarity in academic programs and professional practice, social scientists are increasingly interrogating academic discourses on interdisciplinarity, as well as how interdisciplinarity actually works—and does not—in practice. Some have shown, for example, that some interdisciplinary enterprises that aim to serve society can produce deleterious outcomes for which no one can be held to account.",
"title": "Interdisciplinary studies and studies of interdisciplinarity"
},
{
"paragraph_id": 14,
"text": "Since 1998, there has been an ascendancy in the value of interdisciplinary research and teaching and a growth in the number of bachelor's degrees awarded at U.S. universities classified as multi- or interdisciplinary studies. The number of interdisciplinary bachelor's degrees awarded annually rose from 7,000 in 1973 to 30,000 a year by 2005 according to data from the National Center of Educational Statistics (NECS). In addition, educational leaders from the Boyer Commission to Carnegie's President Vartan Gregorian to Alan I. Leshner, CEO of the American Association for the Advancement of Science have advocated for interdisciplinary rather than disciplinary approaches to problem-solving in the 21st century. This has been echoed by federal funding agencies, particularly the National Institutes of Health under the direction of Elias Zerhouni, who has advocated that grant proposals be framed more as interdisciplinary collaborative projects than single-researcher, single-discipline ones.",
"title": "Interdisciplinary studies and studies of interdisciplinarity"
},
{
"paragraph_id": 15,
"text": "At the same time, many thriving longstanding bachelor's in interdisciplinary studies programs in existence for 30 or more years, have been closed down, in spite of healthy enrollment. Examples include Arizona International (formerly part of the University of Arizona), the School of Interdisciplinary Studies at Miami University, and the Department of Interdisciplinary Studies at Wayne State University; others such as the Department of Interdisciplinary Studies at Appalachian State University, and George Mason University's New Century College, have been cut back. Stuart Henry has seen this trend as part of the hegemony of the disciplines in their attempt to recolonize the experimental knowledge production of otherwise marginalized fields of inquiry. This is due to threat perceptions seemingly based on the ascendancy of interdisciplinary studies against traditional academia.",
"title": "Interdisciplinary studies and studies of interdisciplinarity"
},
{
"paragraph_id": 16,
"text": "There are many examples of when a particular idea, almost on the same period, arises in different disciplines. One case is the shift from the approach of focusing on \"specialized segments of attention\" (adopting one particular perspective), to the idea of \"instant sensory awareness of the whole\", an attention to the \"total field\", a \"sense of the whole pattern, of form and function as a unity\", an \"integral idea of structure and configuration\". This has happened in painting (with cubism), physics, poetry, communication and educational theory. According to Marshall McLuhan, this paradigm shift was due to the passage from an era shaped by mechanization, which brought sequentiality, to the era shaped by the instant speed of electricity, which brought simultaneity.",
"title": "Historical examples"
},
{
"paragraph_id": 17,
"text": "An article in the Social Science Journal attempts to provide a simple, common-sense, definition of interdisciplinarity, bypassing the difficulties of defining that concept and obviating the need for such related concepts as transdisciplinarity, pluridisciplinarity, and multidisciplinary:",
"title": "Efforts to simplify and defend the concept"
},
{
"paragraph_id": 18,
"text": "To begin with, a discipline can be conveniently defined as any comparatively self-contained and isolated domain of human experience which possesses its own community of experts. Interdisciplinarity is best seen as bringing together distinctive components of two or more disciplines. In academic discourse, interdisciplinarity typically applies to four realms: knowledge, research, education, and theory. Interdisciplinary knowledge involves familiarity with components of two or more disciplines. Interdisciplinary research combines components of two or more disciplines in the search or creation of new knowledge, operations, or artistic expressions. Interdisciplinary education merges components of two or more disciplines in a single program of instruction. Interdisciplinary theory takes interdisciplinary knowledge, research, or education as its main objects of study.",
"title": "Efforts to simplify and defend the concept"
},
{
"paragraph_id": 19,
"text": "In turn, interdisciplinary richness of any two instances of knowledge, research, or education can be ranked by weighing four variables: number of disciplines involved, the \"distance\" between them, the novelty of any particular combination, and their extent of integration.",
"title": "Efforts to simplify and defend the concept"
},
{
"paragraph_id": 20,
"text": "Interdisciplinary knowledge and research are important because:",
"title": "Efforts to simplify and defend the concept"
},
{
"paragraph_id": 21,
"text": "\"The modern mind divides, specializes, thinks in categories: the Greek instinct was the opposite, to take the widest view, to see things as an organic whole [...]. The Olympic games were designed to test the arete of the whole man, not a merely specialized skill [...]. The great event was the pentathlon, if you won this, you were a man. Needless to say, the Marathon race was never heard of until modern times: the Greeks would have regarded it as a monstrosity.\"",
"title": "Quotations"
},
{
"paragraph_id": 22,
"text": "\"Previously, men could be divided simply into the learned and the ignorant, those more or less the one, and those more or less the other. But your specialist cannot be brought in under either of these two categories. He is not learned, for he is formally ignorant of all that does not enter into his specialty; but neither is he ignorant, because he is 'a scientist,' and 'knows' very well his own tiny portion of the universe. We shall have to say that he is a learned ignoramus, which is a very serious matter, as it implies that he is a person who is ignorant, not in the fashion of the ignorant man, but with all the petulance of one who is learned in his own special line.\"",
"title": "Quotations"
},
{
"paragraph_id": 23,
"text": "\"It is the custom among those who are called 'practical' men to condemn any man capable of a wide survey as a visionary: no man is thought worthy of a voice in politics unless he ignores or does not know nine-tenths of the most important relevant facts.\"",
"title": "Quotations"
}
]
| Interdisciplinarity or interdisciplinary studies involves the combination of multiple academic disciplines into one activity. It draws knowledge from several other fields like sociology, anthropology, psychology, economics, etc. It is about creating something by thinking across boundaries. It is related to an interdiscipline or an interdisciplinary field, which is an organizational unit that crosses traditional boundaries between academic disciplines or schools of thought, as new needs and professions emerge. Large engineering teams are usually interdisciplinary, as a power station or mobile phone or other project requires the melding of several specialties. However, the term "interdisciplinary" is sometimes confined to academic settings. The term interdisciplinary is applied within education and training pedagogies to describe studies that use methods and insights of several established disciplines or traditional fields of study. Interdisciplinarity involves researchers, students, and teachers in the goals of connecting and integrating several academic schools of thought, professions, or technologies—along with their specific perspectives—in the pursuit of a common task. The epidemiology of HIV/AIDS or global warming requires understanding of diverse disciplines to solve complex problems. Interdisciplinary may be applied where the subject is felt to have been neglected or even misrepresented in the traditional disciplinary structure of research institutions, for example, women's studies or ethnic area studies. Interdisciplinarity can likewise be applied to complex subjects that can only be understood by combining the perspectives of two or more fields. The adjective interdisciplinary is most often used in educational circles when researchers from two or more disciplines pool their approaches and modify them so that they are better suited to the problem at hand, including the case of the team-taught course where students are required to understand a given subject in terms of multiple traditional disciplines. For example, the subject of land use may appear differently when examined by different disciplines, for instance, biology, chemistry, economics, geography, and politics. | 2001-10-29T22:02:41Z | 2023-12-19T20:27:07Z | [
"Template:Citation",
"Template:ISBN",
"Template:Commons category",
"Template:Use dmy dates",
"Template:Div col",
"Template:Cite journal",
"Template:Cite web",
"Template:Webarchive",
"Template:Div col end",
"Template:Cite book",
"Template:Cite news",
"Template:Engineering fields",
"Template:Authority control",
"Template:Science",
"Template:Citation needed",
"Template:Blockquote",
"Template:ISSN",
"Template:-",
"Template:Short description",
"Template:Research",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/Interdisciplinarity |
15,205 | Insertion sort | Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time by comparisons. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort. However, insertion sort provides several advantages:
When people manually sort cards in a bridge hand, most use a method that is similar to insertion sort.
Insertion sort iterates, consuming one input element each repetition, and grows a sorted output list. At each iteration, insertion sort removes one element from the input data, finds the location it belongs within the sorted list, and inserts it there. It repeats until no input elements remain.
Sorting is typically done in-place, by iterating up the array, growing the sorted list behind it. At each array-position, it checks the value there against the largest value in the sorted list (which happens to be next to it, in the previous array-position checked). If larger, it leaves the element in place and moves to the next. If smaller, it finds the correct position within the sorted list, shifts all the larger values up to make a space, and inserts into that correct position.
The resulting array after k iterations has the property where the first k + 1 entries are sorted ("+1" because the first entry is skipped). In each iteration the first remaining entry of the input, call it x, is removed and inserted into the result at the correct position, thus extending the result: each element of the sorted prefix greater than x is copied one place to the right as it is compared against x, and x is then written into the gap that opens up.
The most common variant of insertion sort, which operates on arrays, can be described as follows:
Pseudocode of the complete algorithm follows, where the arrays are zero-based:
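To make this concrete, the swap-based version can be rendered in Python (whose lists are zero-based, matching the assumption above) roughly as follows; the function name insertion_sort is purely illustrative.

def insertion_sort(A):
    """Sort the list A in place using the swap-based inner loop."""
    for i in range(1, len(A)):               # A[0:1] is trivially sorted already
        j = i
        # `and` short-circuits, so A[j - 1] is never read when j == 0
        while j > 0 and A[j - 1] > A[j]:
            A[j - 1], A[j] = A[j], A[j - 1]  # swap the adjacent pair
            j -= 1
    return A

For example, insertion_sort([3, 7, 4, 9, 5, 2, 6, 1]) yields [1, 2, 3, 4, 5, 6, 7, 9].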
The outer loop runs over all the elements except the first one, because the single-element prefix A[0:1] is trivially sorted, so the invariant that the first i entries are sorted is true from the start. The inner loop moves element A[i] to its correct place so that after the loop, the first i+1 elements are sorted. Note that the and-operator in the test must use short-circuit evaluation, otherwise the test might result in an array bounds error, when j=0 and it tries to evaluate A[j-1] > A[j] (i.e. accessing A[-1] fails).
After expanding the swap operation in-place as x ← A[j]; A[j] ← A[j-1]; A[j-1] ← x (where x is a temporary variable), a slightly faster version can be produced that moves A[i] to its position in one go and only performs one assignment in the inner loop body:
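A corresponding Python sketch of this single-assignment variant, under the same assumptions, is:

def insertion_sort_shift(A):
    """Faster variant: hold A[i] in x and shift larger elements one place right."""
    for i in range(1, len(A)):
        x = A[i]
        j = i
        while j > 0 and A[j - 1] > x:
            A[j] = A[j - 1]   # only one assignment in the inner loop body
            j -= 1
        A[j] = x              # place x into the spot cleared by the shifts
    return A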
The new inner loop shifts elements to the right to clear a spot for x = A[i].
The algorithm can also be implemented in a recursive way. The recursion just replaces the outer loop, calling itself and storing successively smaller values of n on the stack until n equals 0, where the function then returns up the call chain to execute the code after each recursive call starting with n equal to 1, with n increasing by 1 as each instance of the function returns to the prior instance. The initial call would be insertionSortR(A, length(A)-1).
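A Python sketch of this recursive formulation, reusing the insertionSortR name from the description above and treating n as the index of the last element to be sorted, might be:

def insertionSortR(A, n):
    """Recursively sort A[0..n] in place; the recursion replaces only the outer loop."""
    if n > 0:
        insertionSortR(A, n - 1)         # first sort the prefix A[0..n-1]
        x = A[n]
        j = n
        while j > 0 and A[j - 1] > x:    # then insert A[n] into that sorted prefix
            A[j] = A[j - 1]
            j -= 1
        A[j] = x

As noted above, the initial call is insertionSortR(A, len(A) - 1).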
It does not make the code any shorter and does not reduce the execution time, but it increases the additional memory consumption from O(1) to O(N) (at the deepest level of recursion the stack contains N references to the A array, each with an accompanying value of the variable n from N down to 1).
The best case input is an array that is already sorted. In this case insertion sort has a linear running time (i.e., O(n)). During each iteration, the first remaining element of the input is only compared with the right-most element of the sorted subsection of the array.
The simplest worst case input is an array sorted in reverse order. The set of all worst case inputs consists of all arrays where each element is the smallest or second-smallest of the elements before it. In these cases every iteration of the inner loop will scan and shift the entire sorted subsection of the array before inserting the next element. This gives insertion sort a quadratic running time (i.e., O(n²)).
The average case is also quadratic, which makes insertion sort impractical for sorting large arrays. However, insertion sort is one of the fastest algorithms for sorting very small arrays, even faster than quicksort; indeed, good quicksort implementations use insertion sort for arrays smaller than a certain threshold, including when such small arrays arise as subproblems; the exact threshold must be determined experimentally and depends on the machine, but is commonly around ten.
Example: The following table shows the steps for sorting the sequence {3, 7, 4, 9, 5, 2, 6, 1}. In each step, the key under consideration is underlined. The key that was moved (or left in place because it was the biggest yet considered) in the previous step is marked with an asterisk.
Insertion sort is very similar to selection sort. As in selection sort, after k passes through the array, the first k elements are in sorted order. However, the fundamental difference between the two algorithms is that insertion sort scans backwards from the current key, while selection sort scans forwards. This results in selection sort making the first k elements the k smallest elements of the unsorted input, while in insertion sort they are simply the first k elements of the input.
The primary advantage of insertion sort over selection sort is that selection sort must always scan all remaining elements to find the absolute smallest element in the unsorted portion of the list, while insertion sort requires only a single comparison when the (k + 1)-st element is greater than the k-th element; when this is frequently true (such as if the input array is already sorted or partially sorted), insertion sort is distinctly more efficient compared to selection sort. On average (assuming the rank of the (k + 1)-st element is random), insertion sort will require comparing and shifting half of the previous k elements, meaning that insertion sort will perform about half as many comparisons as selection sort on average.
In the worst case for insertion sort (when the input array is reverse-sorted), insertion sort performs just as many comparisons as selection sort. However, a disadvantage of insertion sort over selection sort is that it requires more writes due to the fact that, on each iteration, inserting the (k + 1)-st element into the sorted portion of the array requires many element swaps to shift all of the following elements, while only a single swap is required for each iteration of selection sort. In general, insertion sort will write to the array O(n²) times, whereas selection sort will write only O(n) times. For this reason selection sort may be preferable in cases where writing to memory is significantly more expensive than reading, such as with EEPROM or flash memory.
While some divide-and-conquer algorithms such as quicksort and mergesort outperform insertion sort for larger arrays, non-recursive sorting algorithms such as insertion sort or selection sort are generally faster for very small arrays (the exact size varies by environment and implementation, but is typically between 7 and 50 elements). Therefore, a useful optimization in the implementation of those algorithms is a hybrid approach, using the simpler algorithm when the array has been divided to a small size.
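The following is a rough sketch of such a hybrid, assuming an illustrative cutoff of 16 elements and a deliberately simple quicksort that returns a new sorted list rather than sorting in place; production implementations differ in many details.

THRESHOLD = 16   # assumed cutoff; the best value is machine- and implementation-dependent

def hybrid_sort(A):
    """Quicksort large inputs, but insertion-sort any sublist at or below THRESHOLD."""
    if len(A) <= THRESHOLD:
        for i in range(1, len(A)):       # plain insertion sort on the small list
            x, j = A[i], i
            while j > 0 and A[j - 1] > x:
                A[j] = A[j - 1]
                j -= 1
            A[j] = x
        return A
    pivot = A[len(A) // 2]
    left = [v for v in A if v < pivot]   # simple three-way partition (copies the list)
    mid = [v for v in A if v == pivot]
    right = [v for v in A if v > pivot]
    return hybrid_sort(left) + mid + hybrid_sort(right)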
D.L. Shell made substantial improvements to the algorithm; the modified version is called Shell sort. The sorting algorithm compares elements separated by a distance that decreases on each pass. Shell sort has distinctly improved running times in practical work, with two simple variants requiring O(n^(3/2)) and O(n^(4/3)) running time.
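A short Python sketch of Shell sort follows; the particular gap sequence shown (Ciura's) is only one published choice, and any decreasing sequence of gaps ending in 1 yields a correct sort.

def shell_sort(A, gaps=(701, 301, 132, 57, 23, 10, 4, 1)):
    """Insertion sort applied to elements a gap apart, for a shrinking sequence of gaps."""
    n = len(A)
    for gap in gaps:                     # gaps larger than n simply do nothing
        for i in range(gap, n):
            x = A[i]
            j = i
            while j >= gap and A[j - gap] > x:
                A[j] = A[j - gap]        # shift within the gap-strided subsequence
                j -= gap
            A[j] = x
    return A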
If the cost of comparisons exceeds the cost of swaps, as is the case for example with string keys stored by reference or with human interaction (such as choosing one of a pair displayed side-by-side), then using binary insertion sort may yield better performance. Binary insertion sort employs a binary search to determine the correct location to insert new elements, and therefore performs ⌈log₂ n⌉ comparisons in the worst case. When each element in the array is searched for and inserted this is O(n log n). The algorithm as a whole still has a running time of O(n²) on average because of the series of swaps required for each insertion.
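One way to sketch binary insertion sort in Python is with the standard bisect module: the binary search reduces the number of comparisons per insertion, but the element shifts keep the overall running time quadratic, as noted above.

from bisect import bisect_right

def binary_insertion_sort(A):
    """Locate each insertion point by binary search, then shift elements right."""
    for i in range(1, len(A)):
        x = A[i]
        pos = bisect_right(A, x, 0, i)   # binary search over the sorted prefix A[0:i]
        A[pos + 1:i + 1] = A[pos:i]      # shifting still costs O(n) element moves per insertion
        A[pos] = x
    return A

Using bisect_right (rather than bisect_left) keeps the sort stable, since an element is inserted after any equal keys already placed.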
The number of swaps can be reduced by calculating the position of multiple elements before moving them. For example, if the target position of two elements is calculated before they are moved into the proper position, the number of swaps can be reduced by about 25% for random data. In the extreme case, this variant works similarly to merge sort.
A variant named binary merge sort uses a binary insertion sort to sort groups of 32 elements, followed by a final sort using merge sort. It combines the speed of insertion sort on small data sets with the speed of merge sort on large data sets.
To avoid having to make a series of swaps for each insertion, the input could be stored in a linked list, which allows elements to be spliced into or out of the list in constant time when the position in the list is known. However, searching a linked list requires sequentially following the links to the desired position: a linked list does not have random access, so it cannot use a faster method such as binary search. Therefore, the running time required for searching is O(n), and the time for sorting is O(n²). If a more sophisticated data structure (e.g., heap or binary tree) is used, the time required for searching and insertion can be reduced significantly; this is the essence of heap sort and binary tree sort.
In 2006 Bender, Martin Farach-Colton, and Mosteiro published a new variant of insertion sort called library sort or gapped insertion sort that leaves a small number of unused spaces (i.e., "gaps") spread throughout the array. The benefit is that insertions need only shift elements over until a gap is reached. The authors show that this sorting algorithm runs with high probability in O(n log n) time.
If a skip list is used, the insertion time is brought down to O(log n), and swaps are not needed because the skip list is implemented on a linked list structure. The final running time for insertion would be O(n log n).
If the items are stored in a linked list, then the list can be sorted with O(1) additional space. The algorithm starts with an initially empty (and therefore trivially sorted) list. The input items are taken off the list one at a time, and then inserted in the proper place in the sorted list. When the input list is empty, the sorted list has the desired result.
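A Python sketch of this linked-list approach, using a trailing pointer and a minimal Node class defined here only for illustration, might look like:

class Node:
    """Minimal singly linked list node."""
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

def insertion_sort_list(head):
    """Return the head of a sorted list, splicing one input node into place at a time."""
    sorted_head = None
    while head is not None:
        node, head = head, head.next                  # detach the next input node
        if sorted_head is None or node.value < sorted_head.value:
            node.next = sorted_head                   # node becomes the new smallest element
            sorted_head = node
        else:
            trail = sorted_head                       # trailing pointer into the sorted list
            while trail.next is not None and trail.next.value <= node.value:
                trail = trail.next
            node.next = trail.next                    # splice node in after trail
            trail.next = node
    return sorted_head

Only a constant number of pointers (head, sorted_head, node, and trail) are needed, matching the O(1) additional space noted above.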
The algorithm sketched above uses a trailing pointer for the insertion into the sorted list. A simpler recursive method rebuilds the list each time (rather than splicing) and can use O(n) stack space. | [
{
"paragraph_id": 0,
"text": "Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time by comparisons. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort. However, insertion sort provides several advantages:",
"title": ""
},
{
"paragraph_id": 1,
"text": "When people manually sort cards in a bridge hand, most use a method that is similar to insertion sort.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Insertion sort iterates, consuming one input element each repetition, and grows a sorted output list. At each iteration, insertion sort removes one element from the input data, finds the location it belongs within the sorted list, and inserts it there. It repeats until no input elements remain.",
"title": "Algorithm"
},
{
"paragraph_id": 3,
"text": "Sorting is typically done in-place, by iterating up the array, growing the sorted list behind it. At each array-position, it checks the value there against the largest value in the sorted list (which happens to be next to it, in the previous array-position checked). If larger, it leaves the element in place and moves to the next. If smaller, it finds the correct position within the sorted list, shifts all the larger values up to make a space, and inserts into that correct position.",
"title": "Algorithm"
},
{
"paragraph_id": 4,
"text": "The resulting array after k iterations has the property where the first k + 1 entries are sorted (\"+1\" because the first entry is skipped). In each iteration the first remaining entry of the input is removed, and inserted into the result at the correct position, thus extending the result:",
"title": "Algorithm"
},
{
"paragraph_id": 5,
"text": "",
"title": "Algorithm"
},
{
"paragraph_id": 6,
"text": "becomes",
"title": "Algorithm"
},
{
"paragraph_id": 7,
"text": "",
"title": "Algorithm"
},
{
"paragraph_id": 8,
"text": "with each element greater than x copied to the right as it is compared against x.",
"title": "Algorithm"
},
{
"paragraph_id": 9,
"text": "The most common variant of insertion sort, which operates on arrays, can be described as follows:",
"title": "Algorithm"
},
{
"paragraph_id": 10,
"text": "Pseudocode of the complete algorithm follows, where the arrays are zero-based:",
"title": "Algorithm"
},
{
"paragraph_id": 11,
"text": "The outer loop runs over all the elements except the first one, because the single-element prefix A[0:1] is trivially sorted, so the invariant that the first i entries are sorted is true from the start. The inner loop moves element A[i] to its correct place so that after the loop, the first i+1 elements are sorted. Note that the and-operator in the test must use short-circuit evaluation, otherwise the test might result in an array bounds error, when j=0 and it tries to evaluate A[j-1] > A[j] (i.e. accessing A[-1] fails).",
"title": "Algorithm"
},
{
"paragraph_id": 12,
"text": "After expanding the swap operation in-place as x ← A[j]; A[j] ← A[j-1]; A[j-1] ← x (where x is a temporary variable), a slightly faster version can be produced that moves A[i] to its position in one go and only performs one assignment in the inner loop body:",
"title": "Algorithm"
},
{
"paragraph_id": 13,
"text": "The new inner loop shifts elements to the right to clear a spot for x = A[i].",
"title": "Algorithm"
},
{
"paragraph_id": 14,
"text": "The algorithm can also be implemented in a recursive way. The recursion just replaces the outer loop, calling itself and storing successively smaller values of n on the stack until n equals 0, where the function then returns up the call chain to execute the code after each recursive call starting with n equal to 1, with n increasing by 1 as each instance of the function returns to the prior instance. The initial call would be insertionSortR(A, length(A)-1).",
"title": "Algorithm"
},
{
"paragraph_id": 15,
"text": "It does not make the code any shorter, it also does not reduce the execution time, but it increases the additional memory consumption from O(1) to O(N) (at the deepest level of recursion the stack contains N references to the A array, each with accompanying value of variable n from N down to 1).",
"title": "Algorithm"
},
{
"paragraph_id": 16,
"text": "The best case input is an array that is already sorted. In this case insertion sort has a linear running time (i.e., O(n)). During each iteration, the first remaining element of the input is only compared with the right-most element of the sorted subsection of the array.",
"title": "Best, worst, and average cases"
},
{
"paragraph_id": 17,
"text": "The simplest worst case input is an array sorted in reverse order. The set of all worst case inputs consists of all arrays where each element is the smallest or second-smallest of the elements before it. In these cases every iteration of the inner loop will scan and shift the entire sorted subsection of the array before inserting the next element. This gives insertion sort a quadratic running time (i.e., O(n)).",
"title": "Best, worst, and average cases"
},
{
"paragraph_id": 18,
"text": "The average case is also quadratic, which makes insertion sort impractical for sorting large arrays. However, insertion sort is one of the fastest algorithms for sorting very small arrays, even faster than quicksort; indeed, good quicksort implementations use insertion sort for arrays smaller than a certain threshold, also when arising as subproblems; the exact threshold must be determined experimentally and depends on the machine, but is commonly around ten.",
"title": "Best, worst, and average cases"
},
{
"paragraph_id": 19,
"text": "Example: The following table shows the steps for sorting the sequence {3, 7, 4, 9, 5, 2, 6, 1}. In each step, the key under consideration is underlined. The key that was moved (or left in place because it was the biggest yet considered) in the previous step is marked with an asterisk.",
"title": "Best, worst, and average cases"
},
{
"paragraph_id": 20,
"text": "Insertion sort is very similar to selection sort. As in selection sort, after k passes through the array, the first k elements are in sorted order. However, the fundamental difference between the two algorithms is that insertion sort scans backwards from the current key, while selection sort scans forwards. This results in selection sort making the first k elements the k smallest elements of the unsorted input, while in insertion sort they are simply the first k elements of the input.",
"title": "Relation to other sorting algorithms"
},
{
"paragraph_id": 21,
"text": "The primary advantage of insertion sort over selection sort is that selection sort must always scan all remaining elements to find the absolute smallest element in the unsorted portion of the list, while insertion sort requires only a single comparison when the (k + 1)-st element is greater than the k-th element; when this is frequently true (such as if the input array is already sorted or partially sorted), insertion sort is distinctly more efficient compared to selection sort. On average (assuming the rank of the (k + 1)-st element rank is random), insertion sort will require comparing and shifting half of the previous k elements, meaning that insertion sort will perform about half as many comparisons as selection sort on average.",
"title": "Relation to other sorting algorithms"
},
{
"paragraph_id": 22,
"text": "In the worst case for insertion sort (when the input array is reverse-sorted), insertion sort performs just as many comparisons as selection sort. However, a disadvantage of insertion sort over selection sort is that it requires more writes due to the fact that, on each iteration, inserting the (k + 1)-st element into the sorted portion of the array requires many element swaps to shift all of the following elements, while only a single swap is required for each iteration of selection sort. In general, insertion sort will write to the array O(n) times, whereas selection sort will write only O(n) times. For this reason selection sort may be preferable in cases where writing to memory is significantly more expensive than reading, such as with EEPROM or flash memory.",
"title": "Relation to other sorting algorithms"
},
{
"paragraph_id": 23,
"text": "While some divide-and-conquer algorithms such as quicksort and mergesort outperform insertion sort for larger arrays, non-recursive sorting algorithms such as insertion sort or selection sort are generally faster for very small arrays (the exact size varies by environment and implementation, but is typically between 7 and 50 elements). Therefore, a useful optimization in the implementation of those algorithms is a hybrid approach, using the simpler algorithm when the array has been divided to a small size.",
"title": "Relation to other sorting algorithms"
},
{
"paragraph_id": 24,
"text": "D.L. Shell made substantial improvements to the algorithm; the modified version is called Shell sort. The sorting algorithm compares elements separated by a distance that decreases on each pass. Shell sort has distinctly improved running times in practical work, with two simple variants requiring O(n) and O(n) running time.",
"title": "Variants"
},
{
"paragraph_id": 25,
"text": "If the cost of comparisons exceeds the cost of swaps, as is the case for example with string keys stored by reference or with human interaction (such as choosing one of a pair displayed side-by-side), then using binary insertion sort may yield better performance. Binary insertion sort employs a binary search to determine the correct location to insert new elements, and therefore performs ⌈log2 n⌉ comparisons in the worst case. When each element in the array is searched for and inserted this is O(n log n). The algorithm as a whole still has a running time of O(n) on average because of the series of swaps required for each insertion.",
"title": "Variants"
},
{
"paragraph_id": 26,
"text": "The number of swaps can be reduced by calculating the position of multiple elements before moving them. For example, if the target position of two elements is calculated before they are moved into the proper position, the number of swaps can be reduced by about 25% for random data. In the extreme case, this variant works similar to merge sort.",
"title": "Variants"
},
{
"paragraph_id": 27,
"text": "A variant named binary merge sort uses a binary insertion sort to sort groups of 32 elements, followed by a final sort using merge sort. It combines the speed of insertion sort on small data sets with the speed of merge sort on large data sets.",
"title": "Variants"
},
{
"paragraph_id": 28,
"text": "To avoid having to make a series of swaps for each insertion, the input could be stored in a linked list, which allows elements to be spliced into or out of the list in constant time when the position in the list is known. However, searching a linked list requires sequentially following the links to the desired position: a linked list does not have random access, so it cannot use a faster method such as binary search. Therefore, the running time required for searching is O(n), and the time for sorting is O(n). If a more sophisticated data structure (e.g., heap or binary tree) is used, the time required for searching and insertion can be reduced significantly; this is the essence of heap sort and binary tree sort.",
"title": "Variants"
},
{
"paragraph_id": 29,
"text": "In 2006 Bender, Martin Farach-Colton, and Mosteiro published a new variant of insertion sort called library sort or gapped insertion sort that leaves a small number of unused spaces (i.e., \"gaps\") spread throughout the array. The benefit is that insertions need only shift elements over until a gap is reached. The authors show that this sorting algorithm runs with high probability in O(n log n) time.",
"title": "Variants"
},
{
"paragraph_id": 30,
"text": "If a skip list is used, the insertion time is brought down to O(log n), and swaps are not needed because the skip list is implemented on a linked list structure. The final running time for insertion would be O(n log n).",
"title": "Variants"
},
{
"paragraph_id": 31,
"text": "If the items are stored in a linked list, then the list can be sorted with O(1) additional space. The algorithm starts with an initially empty (and therefore trivially sorted) list. The input items are taken off the list one at a time, and then inserted in the proper place in the sorted list. When the input list is empty, the sorted list has the desired result.",
"title": "Variants"
},
{
"paragraph_id": 32,
"text": "The algorithm below uses a trailing pointer for the insertion into the sorted list. A simpler recursive method rebuilds the list each time (rather than splicing) and can use O(n) stack space.",
"title": "Variants"
}
]
| Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time by comparisons. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort. However, insertion sort provides several advantages: Simple implementation: Jon Bentley shows a three-line C/C++ version that is five lines when optimized.
Efficient for (quite) small data sets, much like other quadratic (i.e., O(n²)) sorting algorithms
More efficient in practice than most other simple quadratic algorithms such as selection sort or bubble sort
Adaptive, i.e., efficient for data sets that are already substantially sorted: the time complexity is O(kn) when each element in the input is no more than k places away from its sorted position
Stable; i.e., does not change the relative order of elements with equal keys
In-place; i.e., only requires a constant amount O(1) of additional memory space
Online; i.e., can sort a list as it receives it When people manually sort cards in a bridge hand, most use a method that is similar to insertion sort. | 2001-10-29T20:10:06Z | 2023-12-29T00:32:23Z | [
"Template:Short description",
"Template:Math",
"Template:Introduction to Algorithms",
"Template:Cite web",
"Template:Cite journal",
"Template:Webarchive",
"Template:Reflist",
"Template:Cite book",
"Template:Citation",
"Template:Wikibooks",
"Template:Infobox Algorithm",
"Template:Mvar",
"Template:Code",
"Template:Commons category",
"Template:Sorting"
]
| https://en.wikipedia.org/wiki/Insertion_sort |
15,207 | Ig Nobel Prize | The Ig Nobel Prize (/ˌɪɡnoʊˈbɛl/ IG-noh-BEL) is a satiric prize awarded annually since 1991 to celebrate ten unusual or trivial achievements in scientific research. Its aim is to "honor achievements that first make people laugh, and then make them think." The name of the award is a pun on the Nobel Prize, which it parodies, and on the word ignoble.
Organized by the scientific humor magazine Annals of Improbable Research (AIR), the Ig Nobel Prizes are presented by Nobel laureates in a ceremony at the Sanders Theater at Harvard University, and are followed by the winners' public lectures at the Massachusetts Institute of Technology.
The Ig Nobels were created in 1991 by Marc Abrahams, editor and co-founder of the Annals of Improbable Research, a former editor-in-chief of the Journal of Irreproducible Results, who has been the master of ceremonies at all awards ceremonies. Awards were presented at that time for discoveries "that cannot, or should not, be reproduced". Ten prizes are awarded each year in many categories, including the Nobel Prize categories of physics, chemistry, physiology/medicine, literature, and peace, but also other categories such as public health, engineering, biology, and interdisciplinary research. The Ig Nobel Prizes recognize genuine achievements, with the exception of three prizes awarded in the first year to fictitious scientists Josiah S. Carberry, Paul DeFanti, and Thomas Kyle.
The awards are sometimes criticism via satire, as in the two awards given for homeopathy research, prizes in "science education" to the Kansas State Department of Education and Colorado State Board of Education for their stance regarding the teaching of evolution, and the prize awarded to Social Text after the Sokal affair. Most often, however, they draw attention to scientific articles that have some humorous or unexpected aspect. Examples range from the discovery that the presence of humans tends to sexually arouse ostriches, to the statement that black holes fulfill all the technical requirements for being the location of Hell, to research on the "five-second rule", a tongue-in-cheek belief that food dropped on the floor will not become contaminated if it is picked up within five seconds.
Sir Andre Geim, who had been awarded an Ig Nobel Prize in 2000 for levitating a frog by magnetism, was awarded a Nobel Prize in physics in 2010 for his work with the electromagnetic properties of graphene. He is the only individual, as of 2023, to have received both a Nobel and an Ig Nobel.
The prizes are mostly presented by Nobel laureates, originally at a ceremony in a lecture hall at MIT but since 1994 in the Sanders Theater at Harvard University. Due to the COVID-19 pandemic, the 2020 and 2021 ceremonies were held fully online. The event contains a number of running jokes, including Miss Sweetie Poo, a little girl who repeatedly cries out, "Please stop: I'm bored", in a high-pitched voice if speakers go on too long. The awards ceremony is traditionally closed with the words: "If you didn't win a prize—and especially if you did—better luck next year!"
The ceremony is co-sponsored by the Harvard Computer Society, the Harvard–Radcliffe Science Fiction Association and the Harvard–Radcliffe Society of Physics Students.
Throwing paper planes onto the stage is a long-standing tradition. For many years Professor Roy J. Glauber swept the stage clean of the airplanes as the official "Keeper of the Broom". Glauber could not attend the 2005 awards because he was traveling to Stockholm to claim a genuine Nobel Prize in Physics.
The "Parade of Ignitaries" into the hall includes supporting groups. At the 1997 ceremonies, a team of "cryogenic sex researchers" distributed a pamphlet titled "Safe Sex at Four Kelvin." Delegates from the Museum of Bad Art are often on hand to display some pieces from their collection.
The traditional closing line is: "If you didn’t win an Ig Nobel Prize tonight—and especially if you did—better luck next year."
The ceremony is recorded and broadcast on National Public Radio in the US and is shown live over the Internet. The recording is broadcast each year, on the Friday after US Thanksgiving, on the public radio program Science Friday. In recognition of this, the audience chants the name of the radio show's host, Ira Flatow.
Two books have been published with write-ups on some winners: The Ig Nobel Prize and The Ig Nobel Prize 2, the latter of which was later retitled The Man Who Tried to Clone Himself.
An Ig Nobel Tour has been an annual part of National Science Week in the United Kingdom since 2003. The tour has also traveled to Australia several times, to Aarhus University in Denmark in April 2009, and to Italy and the Netherlands.
A September 2009 article in The National titled "A noble side to Ig Nobels" says that, although the Ig Nobel Awards are veiled criticism of trivial research, history shows that trivial research sometimes leads to important breakthroughs. For instance, in 2006, a study showing that one of the malaria mosquitoes (Anopheles gambiae) is attracted equally to the smell of Limburger cheese and the smell of human feet earned the Ig Nobel Prize in the area of biology. As a direct result of these findings, traps baited with this cheese have been placed in strategic locations in some parts of Africa to combat the epidemic of malaria. Andre Geim, before sharing the 2010 Nobel Prize in Physics for his research on graphene, shared the Physics Ig Nobel in 2000 with Michael Berry for the magnetic levitation of a frog, which by 2022 was reportedly part of the inspiration for China's lunar gravity research facility. | [
{
"paragraph_id": 0,
"text": "The Ig Nobel Prize (/ˌɪɡnoʊˈbɛl/ IG-noh-BEL) is a satiric prize awarded annually since 1991 to celebrate ten unusual or trivial achievements in scientific research. Its aim is to \"honor achievements that first make people laugh, and then make them think.\" The name of the award is a pun on the Nobel Prize, which it parodies, and on the word ignoble.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Organized by the scientific humor magazine Annals of Improbable Research (AIR), the Ig Nobel Prizes are presented by Nobel laureates in a ceremony at the Sanders Theater at Harvard University, and are followed by the winners' public lectures at the Massachusetts Institute of Technology.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The Ig Nobels were created in 1991 by Marc Abrahams, editor and co-founder of the Annals of Improbable Research, a former editor-in-chief of the Journal of Irreproducible Results, who has been the master of ceremonies at all awards ceremonies. Awards were presented at that time for discoveries \"that cannot, or should not, be reproduced\". Ten prizes are awarded each year in many categories, including the Nobel Prize categories of physics, chemistry, physiology/medicine, literature, and peace, but also other categories such as public health, engineering, biology, and interdisciplinary research. The Ig Nobel Prizes recognize genuine achievements, with the exception of three prizes awarded in the first year to fictitious scientists Josiah S. Carberry, Paul DeFanti, and Thomas Kyle.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "The awards are sometimes criticism via satire, as in the two awards given for homeopathy research, prizes in \"science education\" to the Kansas State Department of Education and Colorado State Board of Education for their stance regarding the teaching of evolution, and the prize awarded to Social Text after the Sokal affair. Most often, however, they draw attention to scientific articles that have some humorous or unexpected aspect. Examples range from the discovery that the presence of humans tends to sexually arouse ostriches, to the statement that black holes fulfill all the technical requirements for being the location of Hell, to research on the \"five-second rule\", a tongue-in-cheek belief that food dropped on the floor will not become contaminated if it is picked up within five seconds.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Sir Andre Geim, who had been awarded an Ig Nobel Prize in 2000 for levitating a frog by magnetism, was awarded a Nobel Prize in physics in 2010 for his work with the electromagnetic properties of graphene. He is the only individual, as of 2023, to have received both a Nobel and an Ig Nobel.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The prizes are mostly presented by Nobel laureates, originally at a ceremony in a lecture hall at MIT but since 1994 in the Sanders Theater at Harvard University. Due to the COVID-19 pandemic, the 2020 and 2021 event was held fully online. The event contains a number of running jokes, including Miss Sweetie Poo, a little girl who repeatedly cries out, \"Please stop: I'm bored\", in a high-pitched voice if speakers go on too long. The awards ceremony is traditionally closed with the words: \"If you didn't win a prize—and especially if you did—better luck next year!\"",
"title": "Ceremony"
},
{
"paragraph_id": 6,
"text": "The ceremony is co-sponsored by the Harvard Computer Society, the Harvard–Radcliffe Science Fiction Association and the Harvard–Radcliffe Society of Physics Students.",
"title": "Ceremony"
},
{
"paragraph_id": 7,
"text": "Throwing paper planes onto the stage is a long-standing tradition. For many years Professor Roy J. Glauber swept the stage clean of the airplanes as the official \"Keeper of the Broom\". Glauber could not attend the 2005 awards because he was traveling to Stockholm to claim a genuine Nobel Prize in Physics.",
"title": "Ceremony"
},
{
"paragraph_id": 8,
"text": "The \"Parade of Ignitaries\" into the hall includes supporting groups. At the 1997 ceremonies, a team of \"cryogenic sex researchers\" distributed a pamphlet titled \"Safe Sex at Four Kelvin.\" Delegates from the Museum of Bad Art are often on hand to display some pieces from their collection.",
"title": "Ceremony"
},
{
"paragraph_id": 9,
"text": "The traditional closing line is: \"If you didn’t win an Ig Nobel Prize tonight—and especially if you did—better luck next year.\"",
"title": "Ceremony"
},
{
"paragraph_id": 10,
"text": "The ceremony is recorded and broadcast on National Public Radio in the US and is shown live over the Internet. The recording is broadcast each year, on the Friday after US Thanksgiving, on the public radio program Science Friday. In recognition of this, the audience chants the name of the radio show's host, Ira Flatow.",
"title": "Outreach"
},
{
"paragraph_id": 11,
"text": "Two books have been published with write-ups on some winners: The Ig Nobel Prize and The Ig Nobel Prize 2, the latter of which was later retitled The Man Who Tried to Clone Himself.",
"title": "Outreach"
},
{
"paragraph_id": 12,
"text": "An Ig Nobel Tour has been an annual part of National Science week in the United Kingdom since 2003. The tour has also traveled to Australia several times, Aarhus University in Denmark in April 2009, Italy and The Netherlands.",
"title": "Outreach"
},
{
"paragraph_id": 13,
"text": "A September 2009 article in The National titled \"A noble side to Ig Nobels\" says that, although the Ig Nobel Awards are veiled criticism of trivial research, history shows that trivial research sometimes leads to important breakthroughs. For instance, in 2006, a study showing that one of the malaria mosquitoes (Anopheles gambiae) is attracted equally to the smell of Limburger cheese and the smell of human feet earned the Ig Nobel Prize in the area of biology. As a direct result of these findings, traps baited with this cheese have been placed in strategic locations in some parts of Africa to combat the epidemic of malaria. Andre Geim, before sharing the 2010 Nobel Prize in Physics for his research on graphene, shared the Physics Ig Nobel in 2000 with Michael Berry for the magnetic levitation of a frog, which by 2022 was reportedly part of the inspiration for China's lunar gravity research facility.",
"title": "Reception"
}
]
| The Ig Nobel Prize is a satiric prize awarded annually since 1991 to celebrate ten unusual or trivial achievements in scientific research. Its aim is to "honor achievements that first make people laugh, and then make them think." The name of the award is a pun on the Nobel Prize, which it parodies, and on the word ignoble. Organized by the scientific humor magazine Annals of Improbable Research (AIR), the Ig Nobel Prizes are presented by Nobel laureates in a ceremony at the Sanders Theater at Harvard University, and are followed by the winners' public lectures at the Massachusetts Institute of Technology. | 2001-10-29T21:40:05Z | 2023-10-03T17:59:12Z | [
"Template:Cite magazine",
"Template:ISBN",
"Template:Cite journal",
"Template:Official website",
"Template:IPAc-en",
"Template:Respell",
"Template:Citation needed",
"Template:Reflist",
"Template:Cite web",
"Template:Use mdy dates",
"Template:Cite news",
"Template:Cbignore",
"Template:Cite report",
"Template:Short description",
"Template:Cite book",
"Template:Commons category",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Ig_Nobel_Prize |
15,208 | Isaac Albéniz | Isaac Manuel Francisco Albéniz y Pascual (Spanish pronunciation: [iˈsak alˈβeniθ]; 29 May 1860 – 18 May 1909) was a Spanish virtuoso pianist, composer, and conductor. He is one of the foremost composers of the Post-Romantic era who also had a significant influence on his contemporaries and younger composers. He is best known for his piano works based on Spanish folk music idioms. Isaac Albéniz was close to the Generation of '98.
Transcriptions of many of his pieces, such as Asturias (Leyenda), Granada, Sevilla, Cadiz, Córdoba, Cataluña, Mallorca, and Tango in D, are important pieces for classical guitar, though he never composed for the guitar. Some of Albéniz's personal papers are held in the Library of Catalonia.
Born in Camprodon, province of Girona, to Ángel Albéniz (a customs official) and his wife, Maria de los Dolores Pascual, Albéniz was a child prodigy who first performed at the age of four. At age seven, after apparently taking lessons from Antoine François Marmontel, he passed the entrance examination for piano at the Conservatoire de Paris, but he was refused admission because he was believed to be too young. By the time he had reached 12, he had made many attempts to run away from home.
His concert career began at the age of nine when his father toured both Isaac and his sister, Clementina, throughout northern Spain. A popular myth is that at the age of twelve Albéniz stowed away in a ship bound for Buenos Aires. He then found himself in Cuba, then in the United States, giving concerts in New York and San Francisco, and then travelled to Liverpool, London and Leipzig. By age 15, he had already given concerts worldwide. This story is not entirely false: Albéniz did travel the world as a performer; however, he was accompanied by his father, who as a customs agent was required to travel frequently. This can be verified by comparing Isaac's concert dates with his father's travel itinerary.
In 1876, after a short stay at the Leipzig Conservatory, he went to study at the Royal Conservatory of Brussels after King Alfonso's personal secretary, Guillermo Morphy, obtained him a royal grant. Count Morphy thought highly of Albéniz, who would later dedicate Sevilla to Morphy's wife when it premiered in Paris in January 1886.
In 1880 Albéniz went to Budapest, Hungary, to study with Franz Liszt, only to find out that Liszt was in Weimar, Germany.
In 1883 he met the teacher and composer Felip Pedrell, who inspired him to write Spanish music such as the Chants d'Espagne. The first movement (Prelude) of that suite, later retitled after the composer's death as Asturias (Leyenda), is now part of the classical guitar repertoire, even though it was originally composed for piano. Many of Albéniz's other compositions were also transcribed for guitar by Francisco Tárrega. At the 1888 Barcelona Universal Exposition, the piano manufacturer Érard sponsored a series of 20 concerts featuring Albéniz's music.
The apex of Albéniz's concert career is considered to be 1889 to 1892 when he had concert tours throughout Europe. During the 1890s Albéniz lived in London and Paris. For London he wrote some musical comedies which brought him to the attention of the wealthy Francis Money-Coutts, 5th Baron Latymer. Money-Coutts commissioned and provided him with librettos for the opera Henry Clifford and for a projected trilogy of Arthurian operas. The first of these, Merlin (1898–1902), was thought to have been lost but has recently been reconstructed and performed. Albéniz never completed Lancelot (only the first act is finished, as a vocal and piano score), and he never began Guinevere, the final part.
In 1900 he started to suffer from Bright's disease and returned to writing piano music. Between 1905 and 1908 he composed his final masterpiece, Iberia (1908), a suite of twelve piano "impressions".
In 1883 the composer married his student Rosina Jordana. They had two children who lived into adulthood: Laura (a painter) and Alfonso (who played for Real Madrid in the early 1900s before embarking on a career as a diplomat). Another child, Blanca, died in 1886, and two other children died in infancy. His great-granddaughter is Cécilia Attias, former wife of Nicolas Sarkozy.
Albéniz died from his kidney disease on 18 May 1909 at age 48 in Cambo-les-Bains, in Labourd, south-western France. Only a few weeks before his death, the French Government bestowed upon Albéniz the Legion of Honour, its highest honour. He is buried at the Montjuïc Cemetery, Barcelona.
Albéniz's early works were mostly "salon style" music. His first published composition, Marcha Militar, appeared in 1868. A number of works written before this are now lost. Until the mid-1880s he continued composing in traditional styles, ranging from those of Jean-Philippe Rameau and Johann Sebastian Bach to those of Ludwig van Beethoven, Frédéric Chopin and Franz Liszt. He also wrote at least five zarzuelas, of which all but two are now lost.
Perhaps the best source on the works is Albéniz himself. He is quoted as commenting on his earlier period works:
There are among them a few things that are not completely worthless. The music is a bit infantile, plain, spirited; but in the end, the people, our Spanish people, are something of all that. I believe that the people are right when they continue to be moved by Córdoba, Mallorca, by the copla of the Sevillanas, by the Serenata, and Granada. In all of them I now note that there is less musical science, less of the grand idea, but more colour, sunlight, flavour of olives. That music of youth, with its little sins and absurdities that almost point out the sentimental affectation ... appears to me like the carvings in the Alhambra, those peculiar arabesques that say nothing with their turns and shapes, but which are like the air, like the sun, like the blackbirds or like the nightingales of its gardens. They are more valuable than all else of Moorish Spain, which though we may not like it, is the true Spain.
During the late 1880s, the strong influence of Spanish style is evident in Albéniz's music. In 1883 Albéniz met the teacher and composer Felipe Pedrell. Pedrell was a leading figure in the development of nationalist Spanish music. In his book The Music of Spain, Gilbert Chase describes Pedrell's influence on Albéniz: "What Albéniz derived from Pedrell was above all a spiritual orientation, the realization of the wonderful values inherent in Spanish music." Felipe Pedrell inspired Albéniz to write Spanish music such as the Suite española, Op. 47, noted for its delicate, intricate melody and abrupt dynamic changes.
In addition to the Spanish spirit infused in Albéniz's music, he incorporated other qualities as well. In her biography of Albéniz, Pola Baytelman discerns four characteristics of the music from the middle period as follows:
1. The dance rhythms of Spain, of which there are a wide variety. 2. The use of cante jondo, which means deep or profound singing. It is the most serious and moving variety of flamenco or Spanish gypsy song, often dealing with themes of death, anguish, or religion. 3. The use of exotic scales also associated with flamenco music. The Phrygian mode is the most prominent in Albéniz's music, although he also used the Aeolian and Mixolydian modes as well as the whole-tone scale. 4. The transfer of guitar idioms into piano writing.
Following his marriage, Albéniz settled in Madrid, Spain and produced a substantial quantity of music in a relatively short period. By 1886 he had written over 50 piano pieces. Albéniz biographer Walter A. Clark says that pieces from this period received enthusiastic reception in the composer's many concerts. Chase describes music from this period,
Taking the guitar as his instrumental model, and drawing his inspiration largely from the peculiar traits of Andalusian folk music—but without using actual folk themes—Albéniz achieves a stylization of Spanish traditional idioms that while thoroughly artistic, gives a captivating impression of spontaneous improvisation... Córdoba is the piece that best represents the style of Albéniz in this period, with its hauntingly beautiful melody, set against the acrid dissonances of the plucked accompaniment imitating the notes of the Moorish guslas. Here is the heady scent of jasmines amid the swaying palm trees, the dream fantasy of an Andalusian "Arabian Nights" in which Albéniz loved to let his imagination dwell.
While Albéniz's crowning achievement, Iberia, was written in the last years of his life in France, many of its preceding works are well-known and of great interest. The five pieces in Chants d'Espagne (Songs of Spain, published in 1892) are a solid example of the compositional ideas he was exploring in the "middle period" of his life. The suite shows what Albéniz biographer Walter Aaron Clark describes as the "first flowering of his unique creative genius", and the beginnings of compositional exploration that became the hallmark of his later works. This period also includes his operatic works—Merlin, Henry Clifford, and Pepita Jiménez. His orchestral works of this period include Spanish Rhapsody (1887) and Catalonia (1899), dedicated to Ramon Casas, who had painted his full-length portrait in 1894.
As one of the leading composers of his era, Albéniz had a profound influence both on his contemporaries and on the future of Spanish music. As a result of his extended stay in France and the friendships he formed with numerous composers there, his composition technique and harmonic language have influenced younger composers such as Claude Debussy and Maurice Ravel. His activities as conductor, performer and composer significantly raised the profile of Spanish music abroad and encouraged Spanish music and musicians in his own country.
Albéniz's works have become an important part of the classical guitar repertoire, many of them transcribed for guitar by Francisco Tárrega, Miguel Llobet and others. Asturias (Leyenda) in particular is heard most often on the guitar, as are Granada, Sevilla, Cadiz, Cataluña, Córdoba, Mallorca, and Tango in D. Gordon Crosskey and Cuban-born guitarist Manuel Barrueco have both made solo guitar arrangements of all eight movements of the Suite española. Selections from Iberia have rarely been attempted on solo guitar but have been very effectively performed by guitar ensembles, such as the performance by John Williams and Julian Bream of Iberia's opening "Evocation". The Doors incorporated "Asturias" into their song "Spanish Caravan"; also, Iron Maiden's "To Tame a Land" uses the introduction of the piece for the song bridge. More recently, a guitar version of Granada functions as something of a love theme in Woody Allen's 2008 film Vicky Cristina Barcelona.
The theme from Asturias was incorporated or adapted in several soundtracks including the 2008 horror film Mirrors, composed by Javier Navarrete, and the Netflix TV show Godless, composed by Carlos Rafael Rivera.
In 1997 the Fundación Isaac Albéniz was founded to promote Spanish music and musicians and to act as a research centre for Albéniz and Spanish music in general.
A street in Quito, Ecuador, is named after him.
A film about his life entitled Albéniz was made in 1947. It was produced in Argentina.
On 29 May 2010, Google celebrated Isaac Albéniz's 150th birthday with a doodle.
References
Sources | [
{
"paragraph_id": 0,
"text": "Isaac Manuel Francisco Albéniz y Pascual (Spanish pronunciation: [iˈsak alˈβeniθ]; 29 May 1860 – 18 May 1909) was a Spanish virtuoso pianist, composer, and conductor. He is one of the foremost composers of the Post-Romantic era who also had a significant influence on his contemporaries and younger composers. He is best known for his piano works based on Spanish folk music idioms. Isaac Albéniz was close to the Generation of '98.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Transcriptions of many of his pieces, such as Asturias (Leyenda), Granada, Sevilla, Cadiz, Córdoba, Cataluña, Mallorca, and Tango in D, are important pieces for classical guitar, though he never composed for the guitar. Some of Albéniz's personal papers of are held in the Library of Catalonia.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Born in Camprodon, province of Girona, to Ángel Albéniz (a customs official) and his wife, Maria de los Dolores Pascual, Albéniz was a child prodigy who first performed at the age of four. At age seven, after apparently taking lessons from Antoine François Marmontel, he passed the entrance examination for piano at the Conservatoire de Paris, but he was refused admission because he was believed to be too young. By the time he had reached 12, he had made many attempts to run away from home.",
"title": "Life"
},
{
"paragraph_id": 3,
"text": "His concert career began at the age of nine when his father toured both Isaac and his sister, Clementina, throughout northern Spain. A popular myth is that at the age of twelve Albéniz stowed away in a ship bound for Buenos Aires. He then found himself in Cuba, then in the United States, giving concerts in New York and San Francisco and then travelled to Liverpool, London and Leipzig. By age 15, he had already given concerts worldwide. This story is not entirely false, Albéniz did travel the world as a performer; however, he was accompanied by his father, who as a customs agent was required to travel frequently. This can be attested by comparing Isaac's concert dates with his father's travel itinerary.",
"title": "Life"
},
{
"paragraph_id": 4,
"text": "In 1876, after a short stay at the Leipzig Conservatory, he went to study at the Royal Conservatory of Brussels after King Alfonso's personal secretary, Guillermo Morphy, obtained him a royal grant. Count Morphy thought highly of Albéniz, who would later dedicate Sevilla to Morphy's wife when it premiered in Paris in January 1886.",
"title": "Life"
},
{
"paragraph_id": 5,
"text": "In 1880 Albéniz went to Budapest, Hungary, to study with Franz Liszt, only to find out that Liszt was in Weimar, Germany.",
"title": "Life"
},
{
"paragraph_id": 6,
"text": "In 1883 he met the teacher and composer Felip Pedrell, who inspired him to write Spanish music such as the Chants d'Espagne. The first movement (Prelude) of that suite, later retitled after the composer's death as Asturias (Leyenda), is now part of the classical guitar repertoire, even though it was originally composed for piano. Many of Albéniz's other compositions were also transcribed for guitar by Francisco Tárrega. At the 1888 Barcelona Universal Exposition, the piano manufacturer Érard sponsored a series of 20 concerts featuring Albéniz's music.",
"title": "Life"
},
{
"paragraph_id": 7,
"text": "The apex of Albéniz's concert career is considered to be 1889 to 1892 when he had concert tours throughout Europe. During the 1890s Albéniz lived in London and Paris. For London he wrote some musical comedies which brought him to the attention of the wealthy Francis Money-Coutts, 5th Baron Latymer. Money-Coutts commissioned and provided him with librettos for the opera Henry Clifford and for a projected trilogy of Arthurian operas. The first of these, Merlin (1898–1902), was thought to have been lost but has recently been reconstructed and performed. Albéniz never completed Lancelot (only the first act is finished, as a vocal and piano score), and he never began Guinevere, the final part.",
"title": "Life"
},
{
"paragraph_id": 8,
"text": "In 1900 he started to suffer from Bright's disease and returned to writing piano music. Between 1905 and 1908 he composed his final masterpiece, Iberia (1908), a suite of twelve piano \"impressions\".",
"title": "Life"
},
{
"paragraph_id": 9,
"text": "In 1883 the composer married his student Rosina Jordana. They had two children who lived into adulthood: Laura (a painter) and Alfonso (who played for Real Madrid in the early 1900s before embarking on a career as a diplomat). Another child, Blanca, died in 1886, and two other children died in infancy. His great-granddaughter is Cécilia Attias, former wife of Nicolas Sarkozy.",
"title": "Life"
},
{
"paragraph_id": 10,
"text": "Albéniz died from his kidney disease on 18 May 1909 at age 48 in Cambo-les-Bains, in Labourd, south-western France. Only a few weeks before his death, the French Government bestowed upon Albéniz the Legion of Honour, its highest honour. He is buried at the Montjuïc Cemetery, Barcelona.",
"title": "Life"
},
{
"paragraph_id": 11,
"text": "Albéniz's early works were mostly \"salon style\" music. Albéniz's first published composition, Marcha Militar, appeared in 1868. A number of works written before this are now lost. He continued composing in traditional styles ranging from Jean-Philippe Rameau, Johann Sebastian Bach, Ludwig van Beethoven, Frédéric Chopin and Franz Liszt until the mid-1880s. He also wrote at least five zarzuelas, of which all but two are now lost.",
"title": "Music"
},
{
"paragraph_id": 12,
"text": "Perhaps the best source on the works is Albéniz himself. He is quoted as commenting on his earlier period works as:",
"title": "Music"
},
{
"paragraph_id": 13,
"text": "There are among them a few things that are not completely worthless. The music is a bit infantile, plain, spirited; but in the end, the people, our Spanish people, are something of all that. I believe that the people are right when they continue to be moved by Córdoba, Mallorca, by the copla of the Sevillanas, by the Serenata, and Granada. In all of them I now note that there is less musical science, less of the grand idea, but more colour, sunlight, flavour of olives. That music of youth, with its little sins and absurdities that almost point out the sentimental affectation ... appears to me like the carvings in the Alhambra, those peculiar arabesques that say nothing with their turns and shapes, but which are like the air, like the sun, like the blackbirds or like the nightingales of its gardens. They are more valuable than all else of Moorish Spain, which though we may not like it, is the true Spain.",
"title": "Music"
},
{
"paragraph_id": 14,
"text": "During the late 1880s, the strong influence of Spanish style is evident in Albéniz's music. In 1883 Albéniz met the teacher and composer Felipe Pedrell. Pedrell was a leading figure in the development of nationalist Spanish music. In his book The Music of Spain, Gilbert Chase describes Pedrell's influence on Albéniz: \"What Albéniz derived from Pedrell was above all a spiritual orientation, the realization of the wonderful values inherent in Spanish music.\" Felipe Pedrell inspired Albéniz to write Spanish music such as the Suite española, Op. 47, noted for its delicate, intricate melody and abrupt dynamic changes.",
"title": "Music"
},
{
"paragraph_id": 15,
"text": "In addition to the Spanish spirit infused in Albéniz's music, he incorporated other qualities as well. In her biography of Albéniz, Pola Baytelman discerns four characteristics of the music from the middle period as follows:",
"title": "Music"
},
{
"paragraph_id": 16,
"text": "1. The dance rhythms of Spain, of which there are a wide variety. 2. The use of cante jondo, which means deep or profound singing. It is the most serious and moving variety of flamenco or Spanish gypsy song, often dealing with themes of death, anguish, or religion. 3. The use of exotic scales also associated with flamenco music. The Phrygian mode is the most prominent in Albéniz's music, although he also used the Aeolian and Mixolydian modes as well as the whole-tone scale. 4. The transfer of guitar idioms into piano writing.",
"title": "Music"
},
{
"paragraph_id": 17,
"text": "Following his marriage, Albéniz settled in Madrid, Spain and produced a substantial quantity of music in a relatively short period. By 1886 he had written over 50 piano pieces. Albéniz biographer Walter A. Clark says that pieces from this period received enthusiastic reception in the composer's many concerts. Chase describes music from this period,",
"title": "Music"
},
{
"paragraph_id": 18,
"text": "Taking the guitar as his instrumental model, and drawing his inspiration largely from the peculiar traits of Andalusian folk music—but without using actual folk themes—Albéniz achieves a stylization of Spanish traditional idioms that while thoroughly artistic, gives a captivating impression of spontaneous improvisation... Córdoba is the piece that best represents the style of Albéniz in this period, with its hauntingly beautiful melody, set against the acrid dissonances of the plucked accompaniment imitating the notes of the Moorish guslas. Here is the heady scent of jasmines amid the swaying palm trees, the dream fantasy of an Andalusian \"Arabian Nights\" in which Albéniz loved to let his imagination dwell.",
"title": "Music"
},
{
"paragraph_id": 19,
"text": "While Albéniz's crowning achievement, Iberia, was written in the last years of his life in France, many of its preceding works are well-known and of great interest. The five pieces in Chants d'Espagne, (Songs of Spain, published in 1892) are a solid example of the compositional ideas he was exploring in the \"middle period\" of his life. The suite shows what Albéniz biographer Walter Aaron Clark describes as the \"first flowering of his unique creative genius\", and the beginnings of compositional exploration that became the hallmark of his later works. This period also includes his operatic works—Merlin, Henry Clifford, and Pepita Jiménez. His orchestral works of this period include Spanish Rhapsody (1887) and Catalonia (1899), dedicated to Ramon Casas, who had painted his full-length portrait in 1894.",
"title": "Music"
},
{
"paragraph_id": 20,
"text": "As one of the leading composers of his era, Albéniz's influences on both contemporary composers and on the future of Spanish music are profound. As a result of his extended stay in France and the friendship he formed with numerous composers there, his composition technique and harmonic language has influenced aspiring younger composers such as Claude Debussy and Maurice Ravel. His activities as conductor, performer and composer significantly raised the profile of Spanish music abroad and encouraged Spanish music and musicians in his own country.",
"title": "Impact"
},
{
"paragraph_id": 21,
"text": "Albéniz's works have become an important part of the repertoire of the classical guitar, many of which have been transcribed by Francisco Tárrega, Miguel Llobet and others. Asturias (Leyenda) in particular is heard most often on the guitar, as are Granada, Sevilla, Cadiz, Cataluña, Córdoba, Mallorca, and Tango in D. Gordon Crosskey and Cuban-born guitarist Manuel Barrueco have both made solo guitar arrangements of all the eight-movements in Suite española. Selections from Iberia have rarely been attempted on solo guitar but have been very effectively performed by guitar ensembles, such as the performance by John Williams and Julian Bream of Iberia's opening \"Evocation\". The Doors incorporated \"Asturias\" into their song \"Spanish Caravan\"; also, Iron Maiden's \"To Tame a Land\" uses the introduction of the piece for the song bridge. More recently, a guitar version of Granada functions as something of a love theme in Woody Allen's 2008 film Vicky Cristina Barcelona.",
"title": "Impact"
},
{
"paragraph_id": 22,
"text": "The theme from Asturias was incorporated or adapted in several soundtracks including the 2008 horror film Mirrors, composed by Javier Navarrete, and the Netflix TV show Godless, composed by Carlos Rafael Rivera.",
"title": "Impact"
},
{
"paragraph_id": 23,
"text": "In 1997 the Fundación Isaac Albéniz was founded to promote Spanish music and musicians and to act as a research centre for Albéniz and Spanish music in general.",
"title": "Impact"
},
{
"paragraph_id": 24,
"text": "A street in Quito, Ecuador, is named after him.",
"title": "Impact"
},
{
"paragraph_id": 25,
"text": "A film about his life entitled Albéniz was made in 1947. It was produced in Argentina.",
"title": "In film"
},
{
"paragraph_id": 26,
"text": "On 29 May 2010, Google celebrated Isaac Albeniz's 150th Birthday with a doodle.",
"title": "Tributes"
},
{
"paragraph_id": 27,
"text": "References",
"title": "References and sources"
},
{
"paragraph_id": 28,
"text": "Sources",
"title": "References and sources"
}
]
| Isaac Manuel Francisco Albéniz y Pascual was a Spanish virtuoso pianist, composer, and conductor. He is one of the foremost composers of the Post-Romantic era who also had a significant influence on his contemporaries and younger composers. He is best known for his piano works based on Spanish folk music idioms. Isaac Albéniz was close to the Generation of '98. Transcriptions of many of his pieces, such as Asturias (Leyenda), Granada, Sevilla, Cadiz, Córdoba, Cataluña, Mallorca, and Tango in D, are important pieces for classical guitar, though he never composed for the guitar. Some of Albéniz's personal papers are held in the Library of Catalonia. | 2001-10-30T11:55:14Z | 2023-12-19T09:13:02Z | [
"Template:Short description",
"Template:Use shortened footnotes",
"Template:Reflist",
"Template:Webarchive",
"Template:Portal bar",
"Template:Family name hatnote",
"Template:IPA-es",
"Template:Sfn",
"Template:Cite book",
"Template:In lang",
"Template:Cite web",
"Template:Commons category",
"Template:IMSLP",
"Template:Harvnb",
"Template:OCLC",
"Template:Infobox person",
"Template:Div col",
"Template:Authority control",
"Template:Citation needed",
"Template:Page needed",
"Template:Redirect",
"Template:JSTOR",
"Template:Internet Archive author",
"Template:Isaac Albéniz",
"Template:Use dmy dates",
"Template:Further",
"Template:ISBN",
"Template:Div col end",
"Template:Musical nationalism"
]
| https://en.wikipedia.org/wiki/Isaac_Alb%C3%A9niz |
15,210 | ITU-R | The ITU Radiocommunication Sector (ITU-R) is one of the three sectors (divisions or units) of the International Telecommunication Union (ITU) and is responsible for radio communications.
Its role is to manage the international radio-frequency spectrum and satellite orbit resources and to develop standards for radiocommunication systems with the objective of ensuring the effective use of the spectrum.
ITU is required, according to its constitution, to allocate spectrum and register frequency allocation, orbital positions and other parameters of satellites, "in order to avoid harmful interference between radio stations of different countries". The international spectrum management system is therefore based on regulatory procedures for frequency coordination, notification and registration.
ITU-R has a permanent secretariat, the Radiocommunication Bureau, based at the ITU HQ in Geneva, Switzerland. The elected Director of the Bureau is Mr. Mario Maniewicz; he was first elected by the ITU membership to the directorship in 2018.
The CCIR—Comité consultatif international pour la radio, Consultative Committee on International Radio or International Radio Consultative Committee—was founded in 1927.
In 1932 the CCIR and several other organizations (including the original ITU, which had been founded as the International Telegraph Union in 1865) merged to form what would in 1934 become known as the International Telecommunication Union. In 1992, the CCIR became the ITU-R. | [
{
"paragraph_id": 0,
"text": "The ITU Radiocommunication Sector (ITU-R) is one of the three sectors (divisions or units) of the International Telecommunication Union (ITU) and is responsible for radio communications.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Its role is to manage the international radio-frequency spectrum and satellite orbit resources and to develop standards for radiocommunication systems with the objective of ensuring the effective use of the spectrum.",
"title": ""
},
{
"paragraph_id": 2,
"text": "ITU is required, according to its constitution, to allocate spectrum and register frequency allocation, orbital positions and other parameters of satellites, \"in order to avoid harmful interference between radio stations of different countries\". The international spectrum management system is therefore based on regulatory procedures for frequency coordination, notification and registration.",
"title": ""
},
{
"paragraph_id": 3,
"text": "ITU-R has a permanent secretariat, the Radiocommunication Bureau, based at the ITU HQ in Geneva, Switzerland. The elected Director of the Bureau is Mr. Mario Maniewicz; he was first elected by the ITU membership to the directorship in 2018.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The CCIR—Comité consultatif international pour la radio, Consultative Committee on International Radio or International Radio Consultative Committee—was founded in 1927.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In 1932 the CCIR and several other organizations (including the original ITU, which had been founded as the International Telegraph Union in 1865) merged to form what would in 1934 become known as the International Telecommunication Union. In 1992, the CCIR became the ITU-R.",
"title": "History"
}
]
| The ITU Radiocommunication Sector (ITU-R) is one of the three sectors of the International Telecommunication Union (ITU) and is responsible for radio communications. Its role is to manage the international radio-frequency spectrum and satellite orbit resources and to develop standards for radiocommunication systems with the objective of ensuring the effective use of the spectrum. ITU is required, according to its constitution, to allocate spectrum and register frequency allocation, orbital positions and other parameters of satellites, "in order to avoid harmful interference between radio stations of different countries". The international spectrum management system is therefore based on regulatory procedures for frequency coordination, notification and registration. ITU-R has a permanent secretariat, the Radiocommunication Bureau, based at the ITU HQ in Geneva, Switzerland. The elected Director of the Bureau is Mr. Mario Maniewicz; he was first elected by the ITU membership to the directorship in 2018. | 2001-10-31T01:27:43Z | 2023-11-11T18:16:42Z | [
"Template:Anchor",
"Template:Reflist",
"Template:Official website",
"Template:Telecommunications",
"Template:Public-sector space agencies",
"Template:Authority control",
"Template:One source",
"Template:Infobox United Nations",
"Template:Cite web",
"Template:SMPTE standards",
"Template:Short description"
]
| https://en.wikipedia.org/wiki/ITU-R |
15,214 | Irish Civil War | The Irish Civil War (Irish: Cogadh Cathartha na hÉireann; 28 June 1922 – 24 May 1923) was a conflict that followed the Irish War of Independence and accompanied the establishment of the Irish Free State, an entity independent from the United Kingdom but within the British Empire.
The civil war was waged between the Provisional Government of Ireland and the anti-Treaty Irish Republican Army (IRA) over the Anglo-Irish Treaty. The Provisional Government (which became the Free State in December 1922) supported the terms of the treaty, while the anti-Treaty opposition saw it as a betrayal of the Irish Republic that had been proclaimed during the Easter Rising of 1916. Many of the combatants had fought together against the British in the Irish Republican Army during the War of Independence, and had divided after that conflict ended and the treaty negotiations began.
The Civil War was won by the pro-treaty National Army, who first secured Dublin by early July, then went on the offensive against the anti-Treaty strongholds of the south and west, especially the 'Munster Republic', successfully capturing all urban centres by late August. The guerrilla phase of the Irish Civil War lasted another 10 months, before the IRA leadership issued a "dump arms" order to all units, effectively ending the conflict. The National Army benefited from substantial quantities of weapons provided by the British government, particularly artillery and armoured cars.
The conflict left Irish society divided and embittered for generations. Today, the three largest political parties in the Republic of Ireland, Fine Gael, Fianna Fáil, and Sinn Féin, are direct descendants of the opposing sides of the war: Fine Gael from the supporters of the pro-Treaty side; Fianna Fáil from the party formed by Éamon de Valera out of the bulk of the anti-Treaty side; and Sinn Féin from the rump anti-Treaty and irredentist republican party left behind when de Valera's supporters departed.
The Anglo-Irish Treaty was agreed upon to end the 1919–1921 Irish War of Independence between the Irish Republic and the United Kingdom of Great Britain and Ireland. The treaty provided for a self-governing Irish state, having its own army and police. The Treaty also allowed Northern Ireland (the six north-eastern counties – Fermanagh, Antrim, Tyrone, Londonderry, Armagh and Down – where collectively the majority population was of the Protestant religion) to opt out of the new state and return to the United Kingdom – which it did immediately. With the partition of Ireland, a two-year period of communal conflict took place within the newly formed Northern Ireland (see The Troubles in Northern Ireland (1920–1922)). Rather than creating the independent republic for which nationalists had fought, the Irish Free State would be a dominion of the British Empire with the British monarch as head of state, in the same manner as Canada and Australia. The British suggested dominion status in secret correspondence even before treaty negotiations began, but Sinn Féin leader Éamon de Valera rejected it. The treaty also stipulated that members of the new Irish Oireachtas (parliament) would have to take the following "Oath of Allegiance":
I… do solemnly swear true faith and allegiance to the Constitution of the Irish Free State as by law established, and that I will be faithful to His Majesty King George V, his heirs and successors by law in virtue of the common citizenship of Ireland with Great Britain and her adherence to and membership of the group of nations forming the British Commonwealth of nations.
This oath was highly objectionable to many Irish Republicans. Furthermore, the partition of Ireland, which had already been decided by the Westminster parliament in the Government of Ireland Act 1920, was effectively confirmed in the Anglo-Irish treaty. The most contentious areas of the Treaty for the IRA were the disestablishment of the Irish Republic declared in 1919, the abandonment of the First Dáil, the status of the Irish Free State as a dominion in the British Commonwealth and the British retention of the strategic Treaty Ports on Ireland's south western and north western coasts which were to remain occupied by the Royal Navy. All these issues were the cause of a split in the IRA and ultimately civil war.
Michael Collins, the Irish finance minister and Irish Republican Brotherhood (IRB) president, argued in the Dáil Éireann that the treaty gave "not the ultimate freedom that all nations aspire and develop, but the freedom to achieve freedom". However, those against the treaty believed that it would never deliver full Irish independence.
The split over the Treaty was deeply personal. Many on both sides had been close friends and comrades during the War of Independence. This made their disagreement all the more bitter. On 6 December 1921, at the Mansion House, Dublin, Austin Stack, Home Affairs minister, showed President de Valera the evening news announcing the signing of the Treaty: de Valera merely glanced at it; when Eamonn Duggan, part of the returning Irish delegation, handed him an envelope confirming it, he pushed it aside. De Valera had held secret discussions with UK Prime Minister David Lloyd George from 14 to 21 July in London. Collins, also part of the delegation, supposed (with others) that these discussions confirmed the earlier correspondence, i.e. no British acceptance of a Republic. De Valera, Stack and Defence minister Cathal Brugha had then all refused to join the delegation to London. Collins wrote that his inclusion as a plenipotentiary was "a trap" of de Valera's which he was forewarned of, argued against, but walked into anyway, "as a soldier obeying his commanding officer." Arthur Griffith, the delegation chairman, had made a similar comment about obeying orders to de Valera himself. Mutual suspicion and confusion pertained; the delegation was unclear about the cabinet's instructions and individually became burdened to the point of breakdown. Collins expected to be blamed for the compromise within the Treaty and wrote: "Early this morning I signed my death warrant." Notwithstanding this, he was frustrated and at times emotional when de Valera and others refused to support the Treaty, and friendships died.
Dáil Éireann (the parliament of the Irish Republic) narrowly passed the Anglo-Irish Treaty by 64 votes to 57 on 7 January 1922. Following the Treaty's ratification, in accordance with article 17 of the Treaty, the British-recognised Provisional Government of the Irish Free State was established. Its authority under the Treaty was to provide a "provisional arrangement for the administration of Southern Ireland during the interval" before the establishment of the Irish Free State. In accordance with the Treaty, the British Government transferred "the powers and machinery requisite for the discharge of its duties". Before the British Government transferred such powers, the members of the Provisional Government each "signified in writing [their] acceptance of [the Treaty]".
Upon the Treaty's ratification, de Valera resigned as President of the Republic and failed to be re-elected by an even closer vote of 60–58. He challenged the right of the Dáil to approve the treaty, saying that its members were breaking their oath to the Irish Republic. Meanwhile, he continued to promote a compromise whereby the new Irish Free State would be in "external association" with the British Commonwealth rather than be a member of it (the inclusion of republics within the Commonwealth of Nations was not formally implemented until 1949).
In early March, de Valera formed the Cumann na Poblachta ('Republican Association') party while remaining a member of Sinn Féin, and commenced a speaking tour of the more republican province of Munster on 17 March 1922. During the tour he made controversial speeches at Carrick on Suir, Lismore, Dungarvan and Waterford, saying at one point, "If the Treaty were accepted, the fight for freedom would still go on, and the Irish people, instead of fighting foreign soldiers, will have to fight the Irish soldiers of an Irish government set up by Irishmen." At Thurles several days later he repeated this imagery, and added that the IRA "would have to wade through the blood of the soldiers of the Irish Government, and perhaps through that of some members of the Irish Government to get their freedom."
In a letter to the Irish Independent on 23 March, de Valera accepted the accuracy of their report of his comment about "wading" through blood, but deplored that the newspaper had published it.
More seriously, many Irish Republican Army (IRA) officers were also against the treaty, and in March 1922 an ad hoc Army Convention repudiated the authority of the Dáil to accept the treaty. In contrast, the Minister of Defence, Richard Mulcahy, stated in the Dáil on 28 April that conditions in Dublin had prevented a Convention from being held, but that delegates had been selected and voted by ballot to accept the Oath. The anti-Treaty IRA formed their own "Army Executive", which they declared to be the real government of the country, despite the result of the 1921 general election. On 26 April Mulcahy summarised alleged illegal activities by many IRA men over the previous three months, whom he described as 'seceding volunteers', including hundreds of robberies. Yet this fragmenting army was the only police force on the ground following the disintegration of the Irish Republican Police and the disbanding of the Royal Irish Constabulary (RIC).
By putting ten questions to Mulcahy on 28 April, Seán MacEntee argued that the Army Executive had acted continuously on its own to create a republic since 1917, had an unaltered constitution, had never fallen under the control of the Dáil, and that "the only body competent to dissolve the Volunteer Executive was a duly convened convention of the Irish Republican Army" – not the Dáil. By accepting the treaty in January and abandoning the republic, the Dáil majority had effectively deserted the Army Executive. In his reply, Mulcahy rejected this interpretation. Then, in a debate on defence, MacEntee suggested that supporting the Army Executive "even if it meant the scrapping of the Treaty and terrible and immediate war with England, would be better than the civil war which we are beginning at present apparently". MacEntee's supporters added that the many robberies complained of by Mulcahy on 26 April were caused by the lack of payment and provision by the Dáil to the volunteers.
On 14 April 1922, 200 Anti-Treaty IRA militants, with Rory O'Connor as their spokesman, occupied the Four Courts and several other buildings in central Dublin, resulting in a tense stand-off. These anti-treaty Republicans wanted to spark a new armed confrontation with the British, which they hoped would unite the two factions of the IRA against their common enemy. However, for those who were determined to make the Free State into a viable, self-governing Irish state, this was an act of rebellion that would have to be put down by them rather than the British.
Arthur Griffith was in favour of using force against these men immediately, but Michael Collins, who wanted at all costs to avoid civil war, left the Four Courts garrison alone until late June 1922. By this point, the Pro-Treaty Sinn Féin party had secured a large majority in the general election, along with other parties that supported the Treaty. Collins was also coming under continuing pressure from London to assert his government's authority in Dublin.
Collins established an "army re-unification committee" to re-unite the IRA and organised an election pact with de Valera's anti-treaty political followers to campaign jointly in the Free State's first election in 1922 and form a coalition government afterwards. He also tried to reach a compromise with anti-treaty IRA leaders by agreeing to a republican-type constitution (with no mention of the British monarchy) for the new state. IRA leaders such as Liam Lynch were prepared to accept this compromise. However, the proposal for a republican constitution was vetoed by the British as being contrary to the terms of the treaty and they threatened military intervention in the Free State unless the treaty were fully implemented. Collins reluctantly agreed. This completely undermined the electoral pact between the pro- and anti-treaty factions, who went into the Irish general election on 18 June 1922 as hostile parties, both calling themselves Sinn Féin.
The Pro-Treaty Sinn Féin party won the election with 239,193 votes to 133,864 for Anti-Treaty Sinn Féin. A further 247,226 people voted for other parties, most of whom supported the Treaty. Labour's 132,570 votes were ambiguous with regard to the Treaty. According to Hopkinson, "Irish labour and union leaders, while generally pro-Treaty, made little attempt to lead opinion during the Treaty conflict, casting themselves rather as attempted peacemakers." The election showed that a majority of the Irish electorate accepted the treaty and the foundation of the Irish Free State, but de Valera, his political followers and most of the IRA continued to oppose the treaty. De Valera is quoted as saying, "the majority have no right to do wrong".
Meanwhile, under the leadership of Michael Collins and Arthur Griffith, the pro-treaty Provisional Government set about establishing the Irish Free State, and organised the National Army – to replace the IRA – and a new police force. However, since it was envisaged that the new army would be built around the IRA, Anti-Treaty IRA units were allowed to take over British barracks and take their arms. In practice, this meant that by the summer of 1922, the Provisional Government of Southern Ireland controlled only Dublin and some other areas like County Longford where the IRA units supported the treaty. Fighting ultimately broke out when the Provisional Government tried to assert its authority over well-armed and intransigent Anti-Treaty IRA units around the country – particularly a hardliner group in Dublin.
Field Marshal Henry Hughes Wilson, a prominent security adviser to the Prime Minister of Northern Ireland, James Craig, was shot dead by IRA men on his own doorstep in London on 22 June 1922, with no responsibility for the act being publicly claimed by any IRA authority. Winston Churchill assumed that the Anti-Treaty IRA were responsible for the shooting and warned Collins that he would use British troops to attack the Four Courts unless the Provisional Government took action. In fact, the British cabinet actually resolved to attack the Four Courts themselves on 25 June, in an operation that would have involved tanks, howitzers and aeroplanes. However, on the advice of General Nevil Macready, who commanded the British garrison in Dublin, the plan was cancelled at the last minute. Macready's argument was that British involvement would have united Irish Nationalist opinion against the treaty, and instead Collins was given a last chance to clear the Four Courts himself.
On 26 June anti-treaty forces occupying the Four Courts kidnapped JJ "Ginger" O'Connell, a general in the National Army, in retaliation for the arrest of Leo Henderson. Collins, after giving the Four Courts garrison a final (and according to Ernie O'Malley, only) ultimatum to leave the building on 27 June, decided to end the stand-off by bombarding the Four Courts garrison into surrender. The government then appointed Collins as Commander-in-Chief of the National Army. This attack was not the opening shot of the war, as skirmishes had taken place between pro- and anti-treaty IRA factions throughout the country when the British were handing over the barracks. However, this represented the 'point of no return', when all-out war was effectively declared and the Civil War officially began.
Collins ordered Mulcahy to accept a British offer of two 18-pounder field artillery pieces for use by the new army of the Free State, though General Macready gave just 200 shells of the 10,000 he had in store at Richmond Barracks in Inchicore. The anti-treaty forces in the Four Courts, who possessed only small arms, surrendered after three days of bombardment and the storming of the building by Provisional Government troops (28–30 June 1922). Shortly before the surrender, a massive explosion destroyed the western wing of the complex, including the Irish Public Record Office (PRO), injuring many advancing Free State soldiers and destroying the records. Government supporters alleged that the building had been deliberately mined. Historians dispute whether the PRO was intentionally destroyed by mines laid by the Republicans on their evacuation, or whether the explosions occurred when their ammunition store was accidentally ignited by the bombardment. Coogan, however, asserts that two lorry-loads of gelignite were exploded in the PRO, leaving priceless manuscripts floating over the city for several hours afterward.
Pitched battles continued in Dublin until 5 July. IRA units from the Dublin Brigade, led by Oscar Traynor, occupied O'Connell Street – provoking a week's more street fighting and costing another 65 killed and 280 wounded. Among the dead was Republican leader Cathal Brugha, who made his last stand after exiting the Granville Hotel. In addition, the Free State took over 500 Republican prisoners. The civilian casualties are estimated to have numbered well over 250. When the fighting in Dublin died down, the Free State government was left firmly in control of the Irish capital and the anti-treaty forces dispersed around the country, mainly to the south and west.
The outbreak of the Civil War forced pro- and anti-treaty supporters to choose sides. Supporters of the treaty came to be known as "pro-treaty" or Free State Army, legally the National Army, and were often called "Staters" by their opponents. The latter called themselves Republicans and were also known as "anti-treaty" forces or "Irregulars", a term preferred by the Free State side.
The Anti-Treaty IRA claimed that it was defending the Irish Republic declared in 1916 during the Easter Rising, confirmed by the First Dáil and invalidly set aside by those who accepted the compromise of the Free State. Éamon de Valera stated that he would serve as an ordinary IRA volunteer and left the leadership of the anti-treaty Republicans to Liam Lynch, the IRA Chief of Staff. De Valera, though the Republican President as of October 1922, had little control over military operations. The campaign was directed by Liam Lynch until he was killed on 10 April 1923, and then by Frank Aiken from 20 April 1923.
The Civil War split the IRA. When the Civil War broke out, the Anti-Treaty IRA (concentrated in the south and west) outnumbered pro-Free State forces by roughly 12,000 men to 8,000. Moreover, the anti-treaty ranks included many of the IRA's most experienced guerrilla fighters. The paper strength of the IRA in early 1922 was over 72,000 men, but most of them were recruited during the truce with the British and fought in neither the War of Independence nor the Civil War. According to Richard Mulcahy's estimate, the Anti-Treaty IRA at the beginning of the war had 6,780 rifles and 12,900 men.
However, the IRA lacked an effective command structure, a clear strategy and sufficient arms. As well as rifles, they had a handful of machine guns, and many of their fighters were armed only with shotguns or handguns. They also took a small number of armoured cars from British troops as the latter were evacuating the country. Finally, they had no artillery of any kind. As a result, they were forced to adopt a defensive stance throughout the war.
By contrast, the Free State government managed to expand its forces dramatically after the start of the war. Collins and his commanders were able to build up an army that could overwhelm their opponents in the field. British supplies of artillery, aircraft, armoured cars, machine guns, small arms and ammunition were of much help to pro-Treaty forces. The British delivered, for instance, over 27,000 rifles, 250 machine guns and eight 18-pounder artillery pieces to the pro-treaty forces between the outbreak of the Civil War and September 1922. The National Army amounted to 14,000 men by August 1922, was 38,000 strong by the end of 1922, and by the end of the war had grown to 55,000 men and 3,500 officers, far in excess of what the Irish state would need to maintain in peacetime.
Like the Anti-Treaty IRA, the Free State's National Army was initially rooted in the IRA that fought against the British. Collins' most ruthless officers and men were recruited from the Dublin Active Service Unit (the elite unit of the IRA's Dublin Brigade) and from Collins' Intelligence Department and assassination unit, The Squad. In the new National Army, they were known as the Dublin Guard. Towards the end of the war, they were implicated in some notorious atrocities against anti-treaty guerrillas in County Kerry. Up to the outbreak of the Civil War, it had been agreed that only men with service in the IRA could be recruited into the National Army. However, once the war began, all such restrictions were lifted. A 'National Call to Arms' issued on 7 July for recruitment on a six-month basis brought in thousands of new recruits. Many of the new army's recruits were veterans of World War I, in which they had served in since-disbanded Irish regiments of the British Army. Many others were raw recruits without any military experience. The fact that at least 50% of the other ranks had no military experience in turn led to ill-discipline becoming a major problem.
A major problem for the National Army was a shortage of experienced officers. At least 20% of its officers had previously served as officers in the British Army, while 50% of the rank-and-file of the National Army had served in the British Army in World War I. Former British Army officers were also recruited for their technical expertise. A number of the senior Free State commanders, such as Emmet Dalton, John T. Prout and W. R. E. Murphy, had seen service as officers in World War I, Dalton and Murphy in the British Army and Prout in the US Army. The Republicans made much use of this fact in their propaganda – claiming that the Free State was only a proxy force for Britain itself. However, the majority of Free State soldiers were raw recruits without military experience, either in World War I or the Irish War of Independence. There were also a significant number of former members of the British Armed Forces on the Republican side, including such senior figures as Tom Barry, David Robinson and Erskine Childers.
With Dublin in pro-treaty hands, conflict spread throughout the country. The war started with the anti-treaty forces holding Cork, Limerick and Waterford as part of a self-styled Munster Republic. However, since the anti-treaty side were not equipped to wage conventional war, Lynch was unable to take advantage of the Republicans' initial advantage in numbers and territory held. He hoped simply to hold the Munster Republic long enough to force Britain to renegotiate the treaty.
The large towns in Ireland were all relatively easily taken by the Free State in August 1922. Collins, Richard Mulcahy and Eoin O'Duffy planned a nationwide Free State offensive, dispatching columns overland to take Limerick in the west and Waterford in the south-east and seaborne forces to take counties Cork and Kerry in the south and Mayo in the west. In the south, landings occurred at Union Hall in Cork and Fenit, the port of Tralee, in Kerry. Limerick fell on 20 July, Waterford on the same day and Cork city on 10 August after a Free State force landed by sea at Passage West. Another seaborne expedition to Mayo in the west secured government control over that part of the country. While in some places the Republicans had put up determined resistance, nowhere were they able to defeat regular forces armed with artillery and armour. The only real conventional battle during the Free State offensive, the Battle of Kilmallock, was fought when Free State troops advanced south from Limerick.
Government victories in the major towns inaugurated a period of guerrilla warfare. After the fall of Cork, Lynch ordered IRA units to disperse and form flying columns as they had when fighting the British. They held out in areas such as the western part of counties Cork and Kerry in the south, county Wexford in the east and counties Sligo and Mayo in the west. Sporadic fighting also took place around Dundalk, where Frank Aiken and the Fourth Northern Division of the Irish Republican Army were based, and Dublin, where small-scale but regular attacks were mounted on Free State troops.
August and September 1922 saw widespread attacks on Free State forces in the territories that they had occupied in the July–August offensive, inflicting heavy casualties on them. Collins was killed in an ambush by anti-treaty Republicans at Béal na Bláth, near his home in County Cork, in August 1922. Collins' death increased the bitterness of the Free State leadership towards the Republicans and probably contributed to the subsequent descent of the conflict into a cycle of atrocities and reprisals. Arthur Griffith, the Free State president, had also died of a brain haemorrhage ten days before, leaving the government in the hands of W.T. Cosgrave and the Free State army under the command of General Richard Mulcahy. For a brief period, with rising casualties among its troops and its two principal leaders dead, it looked as if the Free State might collapse. However, as winter set in, the Republicans found it increasingly difficult to sustain their campaign, and casualty rates among National Army troops dropped rapidly. For instance, in County Sligo, 54 people died in the conflict, of whom all but eight had been killed by the end of September.
In the autumn and winter of 1922, Free State forces broke up many of the larger Republican guerrilla units – in Sligo, Meath and Connemara in the west, for example, and in much of Dublin city. Elsewhere, anti-treaty units were forced by lack of supplies and safe-houses to disperse into smaller groups, typically of nine to ten men. Despite these successes for the National Army, it took eight more months of intermittent warfare before the war was brought to an end.
By late 1922 and early 1923, the anti-treaty guerrilla campaign had been reduced largely to acts of sabotage and destruction of public infrastructure such as roads and railways. It was also in this period that the Anti-Treaty IRA began burning the homes of Free State Senators and of many of the Anglo-Irish landed class.
In October 1922, de Valera and the anti-treaty Teachtaí Dála (TDs) set up their own "Republican government" in opposition to the Free State. However, by then the anti-treaty side held no significant territory and de Valera's government had no authority over the population.
On 27 September 1922, three months after the outbreak of war, the Free State's Provisional Government put before the Dáil an Army Emergency Powers Resolution proposing to extend the legislation for setting up military tribunals, transferring some of the Free State's judicial powers over Irish citizens accused of anti-government activities to the Army Council. The legislation, commonly referred to as the "Public Safety Bill", set up and empowered military tribunals to impose life imprisonment, as well as the death penalty, for 'aiding or abetting attacks' on state forces, possession of arms and ammunition or explosive 'without the proper authority' and 'looting destruction or arson'.
The final phase of the Civil War degenerated into a series of atrocities that left a lasting legacy of bitterness in Irish politics. The Free State began executing Republican prisoners on 17 November 1922, when five IRA men were shot by firing squad. They were followed on 24 November by the execution of acclaimed author and treaty negotiator Erskine Childers. In all, out of around 12,000 Republican prisoners taken in the conflict, 81 were officially executed by the Free State.
The Anti-Treaty IRA in reprisal assassinated TD Seán Hales on 7 December 1922. The next day four prominent Republicans held since the first week of the war — Rory O'Connor, Liam Mellows, Richard Barrett and Joe McKelvey — were executed in revenge for the killing of Hales. In addition, Free State troops, particularly in County Kerry, where the guerrilla campaign was most bitter, began the summary execution of captured anti-treaty fighters. The most notorious example of this occurred at Ballyseedy, where nine Republican prisoners were tied to a landmine, which was detonated, killing eight of them; the ninth, Stephen Fuller, was blown clear by the blast and escaped.
The number of "unauthorised" executions of Republican prisoners during the war has been put as high as 153. Among the Republican reprisals were the assassinations of Kevin O'Higgins's father and W. T. Cosgrave's uncle in February 1923.
The IRA were unable to maintain an effective guerrilla campaign, given the gradual loss of support. The Catholic Church also supported the Free State, deeming it the lawful government of the country, denouncing the IRA and refusing to administer the Sacraments to anti-treaty fighters. On 10 October 1922, the Catholic Bishops of Ireland issued a formal statement, describing the anti-treaty campaign as:
[A] system of murder and assassination of the National forces without any legitimate authority... the guerrilla warfare now being carried on [by] the Irregulars is without moral sanction and therefore the killing of National soldiers is murder before God, the seizing of public and private property is robbery, the breaking of roads, bridges and railways is criminal. All who in contravention of this teaching, participate in such crimes are guilty of grievous sins and may not be absolved in Confession nor admitted to the Holy Communion if they persist in such evil courses.
The Church's support for the Free State aroused bitter hostility among some republicans. Although the Catholic Church in independent Ireland has often been seen as a triumphalist Church, a recent study has found that it felt deeply insecure after these events.
By early 1923, the offensive capability of the IRA had been seriously eroded and when, in February 1923, the Republican leader Liam Deasy was captured by Free State forces, he called on the republicans to end their campaign and reach an accommodation with the Free State. The State's executions of anti-treaty prisoners, 34 of whom were shot in January 1923, also took their toll on the Republicans' morale.
In addition, the National Army's operations in the field were slowly but steadily breaking up the remaining Republican concentrations.
March and April 1923 saw this progressive dismemberment of the Republican forces continue with the capture and sometimes killing of guerrilla columns. A National Army report of 11 April stated, "Events of the last few days point to the beginning of the end as far as the irregular campaign is concerned".
As the conflict petered out into a de facto victory for the pro-treaty side, de Valera asked the IRA leadership to call a ceasefire, but they refused. The Anti-Treaty IRA executive met on 26 March in County Waterford to discuss the war's future. Tom Barry proposed a motion to end the war, but it was defeated by 6 votes to 5. Éamon de Valera was allowed to attend, after some debate, but was given no voting rights.
Lynch, the Republican leader, was killed in a skirmish in the Knockmealdown Mountains in County Tipperary on 10 April. The National Army had extracted information from Republican prisoners in Dublin that the IRA Executive was in the area; as well as killing Lynch, they captured senior anti-treaty IRA officers Dan Breen, Todd Andrews, Seán Gaynor and Frank Barrett in the operation.
It is often suggested by historians, including Professor Michael Laffan of University College Dublin, that the death of Lynch allowed the more pragmatic Frank Aiken, who took over as IRA Chief of Staff, to call a halt to what seemed a futile struggle. Aiken's accession to the IRA leadership was followed on 30 April by the declaration of a suspension of military activities; on 24 May 1923, he issued a ceasefire order to IRA volunteers. They were to dump arms rather than surrender them or continue a fight that they were incapable of winning.
Éamon de Valera supported the order, issuing a statement to Anti-Treaty fighters on 24 May:
Soldiers of the Republic. Legion of the Rearguard: The Republic can no longer be defended successfully by your arms. Further sacrifice of life would now be in vain and the continuance of the struggle in arms unwise in the national interest and prejudicial to the future of our cause. Military victory must be allowed to rest for the moment with those who have destroyed the Republic.
The Free State government had started peace negotiations in early May, which broke down. The High Court of Justice in Ireland ruled on 31 July 1923 that a state of war no longer existed, and consequently the internment of Republicans, permitted under common law only in wartime, was now illegal. Without a formal peace, holding 13,000 prisoners and worried that fighting could break out again at any time, the government enacted two Public Safety (Emergency Powers) Acts on 1 and 3 August 1923, to permit continued internment and other measures. Thousands of Anti-Treaty IRA members (including de Valera on 15 August) were arrested by the Free State forces in the weeks and months after the end of the war, when they had dumped their arms and returned home.
A general election was held on 27 August 1923, which Cumann na nGaedheal, the pro-Free State party, won with about 40% of the first-preference vote. The Republicans, represented by Sinn Féin, won about 27% of the vote. Many of their candidates and supporters were still imprisoned before, during and after the election.
In October 1923, around 8,000 of the 12,000 Republican prisoners in Free State gaols went on a hunger strike. The strike lasted for 41 days and met with little success (among those who died were Denny Barry, Joseph Whitty and Andy O'Sullivan; see 1923 Irish Hunger Strikes). However, most of the women prisoners were released shortly thereafter, and the hunger strike helped concentrate the Republican movement on the prisoners and their associated organisations. In July, de Valera had recognised that Republican political interests lay with the prisoners and went so far as to say:
The whole future of our cause and of the nation depends in my opinion upon the spirit of the prisoners in the camps and in the jails. You are the repositories of the NATIONAL FAITH AND WILL
Although the cause of the Civil War was the Treaty, as the war developed the anti-treaty forces sought to identify their actions with the traditional Republican cause of the "men of no property" and the result was that large Anglo-Irish landowners and some less well-off Southern Unionists were attacked. A total of 192 "stately homes" of the old landed class and of Free State politicians were destroyed by anti-treaty forces during the war.
The stated reason for such attacks was that some landowners had become Free State senators. In October 1922, a deputation of Southern Unionists met W. T. Cosgrave to offer their support to the Free State, and some of them had received positions in the State's upper house, the Senate. Properties of prominent senators attacked included Palmerstown House near Naas, which belonged to the Earl of Mayo, and Moore Hall in Mayo, as well as the homes of Horace Plunkett (who had helped to establish the rural co-operative schemes) and of Senator Henry Guinness, though the attack on the latter was unsuccessful. Also burned was Marlfield House in Clonmel, the home of Senator John Philip Bagwell, with its extensive library of historical documents. Bagwell was kidnapped and held in the Dublin Mountains, but later released when reprisals were threatened.
However, in addition to their allegiance to the Free State, there were also other factors behind Republican animosity towards the old landed class. Many, but not all, of these people had supported the Crown forces during the War of Independence. This support was often largely moral, but it sometimes took the form of actively assisting the British in the conflict. Attacks on their property should have ended with the Truce of 11 July 1921, but they continued after the truce and escalated during the Civil War. In July 1922, Con Moloney, the IRA Adjutant General, ordered that unionist property should be seized to accommodate their men. The "worst spell" of attacks on former unionist property came in the early months of 1923, with 37 "big houses" being burnt in January and February alone.
Though the Land Purchase (Ireland) Act 1903 allowed tenants to buy land from their landlords, some small farmers, particularly in Mayo and Galway, simply occupied land belonging to political opponents during this period when the RIC had ceased to function. In 1919, senior Sinn Féin officials were sufficiently concerned at this unilateral action that they instituted Arbitration Courts to adjudicate disputes. Sometimes these attacks had sectarian overtones, although most IRA men made no distinction between Catholic and Protestant supporters of the Irish government.
The IRA burnt an orphanage housing Protestant boys near Clifden, County Galway in June 1922, on the ground that it was "pro-British". The 60 orphans were taken to Devonport on board a Royal Navy destroyer.
Controversy continues to this day about the extent of intimidation of Protestants at this time. Many left Ireland during and after the Civil War. Dr Andy Bielenberg of UCC considers that about 41,000 who were not linked to the former British administration left Southern Ireland (which became the Irish Free State) between 1919 and 1923. He has found that a "high-water mark" of this 41,000 left between 1921 and 1923. In all, from 1911 to 1926, the Protestant population of the 26 counties fell from some 10.4% of the total population to 7.4%.
The Civil War attracted international attention which led to various groups expressing support and opposition to the anti-treaty side. The Communist Party of Great Britain in its journal The Communist wrote "The proletarians of the IRA have the future of Ireland in their hands. If the Irish Labour Party would only dare! A mass movement of the Irish workers in alliance with the IRA could establish a Workers' Republic now". They were also supported by the Communist International (Comintern) which on 3 January 1923 passed a resolution stating it "sends fraternal greetings to the struggling Irish national revolutionaries and feels assured that they will soon tread the only path that leads to real freedom – the path of Communism. The CI will assist all efforts to organise the struggle to combat this terror and to help the Irish workers and peasants to victory."
The majority of Irish-Americans supported the treaty, including those in Clann na Gael and Friends of Irish Freedom. However, anti-treaty republicans had control of what was left of Clann na Gael and of the American Association for the Recognition of the Irish Republic, so these organisations supported the anti-treaty side during the war.
The Civil War, though short, was bloody. It cost the lives of many public figures, including Michael Collins, Cathal Brugha, Arthur Griffith and Liam Lynch. Both sides carried out brutal acts: the anti-treaty forces killed a TD and several other pro-Treaty politicians and burned many homes of senators and Free State supporters, while the government executed anti-treaty prisoners, officially and unofficially.
Precise figures for the dead and wounded have yet to be calculated. The pro-treaty forces suffered between 800 and 1,000 fatalities from all causes. It has been suggested that the anti-treaty forces' death toll was higher, but the Republican roll of honour, compiled in the 1920s, lists 426 anti-Treaty IRA Volunteers killed between January 1922 and April 1924. The most recent county-by-county research suggests a death toll of just under 2,000. For total combatant and civilian deaths, a minimum of 1,500 and a maximum of 4,000 have been suggested, though the latter figure is now generally estimated to be too high.
The Garda Síochána (the new police force) was not involved in the war, which meant that it was well placed to develop into an unarmed and politically neutral police service after the war. It had been disarmed by the Government in June–September 1922 in order to win public confidence, and in December 1922 the IRA issued a General Order not to fire on the Civil Guard. The Criminal Investigation Department, or CID, a 350-strong armed, plain-clothes police corps that had been established during the conflict for the purposes of counter-insurgency, was disbanded in October 1923, shortly after the conflict's end.
The economic costs of the war were also high. As their forces abandoned their fixed positions in July–August 1922, the Republicans burned many of the administrative buildings and businesses that they had been occupying. In addition, their subsequent guerrilla campaign caused much destruction, and as a result the economy of the Free State suffered a hard blow in the earliest days of its existence. The material damage caused by the war to property in the Free State has been estimated to be in the region of £50 million in 1922. This is equivalent to about £2.1 billion, or €2.4 billion, worth of damage in 2022 values.
Particularly damaging to the Free State's economy was the systematic destruction of railway infrastructure and roads by the Republicans. In addition, the cost to the Free State of waging the war came to another £17 million (£718m or €883m in 2022 values). By September 1923, Deputy Hogan estimated the cost at £50 million. The new State ended 1923 with a budget deficit of over £4 million (£168m or €196m in 2022 values). This weakened financial situation meant that the new state could not pay its share of the Imperial debt under the treaty. This adversely affected the boundary negotiations of 1924–25, in which the Free State government accepted that the border with Northern Ireland would remain unchanged in exchange for forgiveness of the Imperial debt. Further, the state undertook to pay for damage caused to property between the truce of July 1921 and the end of the Civil War; W. T. Cosgrave told the Dáil:
Every Deputy in this House is aware of the complaint which has been made that the measure of compensation for post-Truce damage compares unfavourably with the awards for damage suffered pre-Truce.
The fact that the Irish Civil War was fought between Irish Nationalist factions meant that the sporadic conflict in Northern Ireland ended. Collins and Sir James Craig signed an agreement to end it on 30 March 1922, but, despite this, Collins covertly supplied arms to the Northern IRA until a week before his death in August 1922. Because of the Irish Civil War, Northern Ireland was able to consolidate its existence and the partition of Ireland was confirmed for the foreseeable future. The continuing war also confirmed the northern Unionists' existing stance against the ethos of all shades of nationalism. These tensions might have led to open hostilities between North and South had the Irish Civil War not broken out. Indeed, the Ulster Special Constabulary (the "B-Specials") that had been established in 1920 (on the foundation of Northern Ireland) was expanded in 1922 rather than being demobilised.
In the event, it was only well after their defeat in the Civil War that anti-treaty Irish Republicans seriously considered whether to take armed action against British rule in Northern Ireland (the first serious suggestion to do this came in the late 1930s). The northern units of the IRA largely supported the Free State side in the Civil War because of Collins's policies, and over 500 of them joined the new Free State's National Army.
The cost of the war and the budget deficit it caused was a difficulty for the new Free State and affected the Boundary Commission negotiations of 1925, which were to determine the border with Northern Ireland. The Free State agreed to waive its claim to predominantly Nationalist areas in Northern Ireland and in return its agreed share of the Imperial debt under the 1921 Treaty was not paid.
In 1926, having failed to persuade the majority of the Anti-Treaty IRA or the anti-treaty party of Sinn Féin to accept the new status quo as a basis for an evolving Republic, a large faction led by de Valera and Aiken left to resume constitutional politics and to found the Fianna Fáil party. Whereas Fianna Fáil was to become the dominant party in Irish politics, Sinn Féin became a small, isolated political party. The IRA, then much more numerous and influential than Sinn Féin, remained associated with Fianna Fáil (though not directly) until banned by de Valera in 1935.
In 1927, Fianna Fáil members took the Oath of Allegiance and entered the Dáil, effectively recognising the legitimacy of the Free State. The Free State was already moving towards independence by this point. Under the Statute of Westminster 1931, the British Parliament gave up its right to legislate for members of the British Commonwealth. When elected to power in 1932, Fianna Fáil under de Valera set about dismantling what they considered to be objectionable features of the treaty, abolishing the Oath of Allegiance, removing the powers of the office of Governor-General (the British representative in Ireland) and abolishing the Senate, which was dominated by former Unionists and pro-treaty Nationalists. In 1937, they passed a new constitution, which made a President the head of state, did not mention any allegiance to the British monarch, and included a territorial claim to Northern Ireland. The following year, Britain returned without conditions the seaports that it had kept under the terms of the treaty. When the Second World War broke out in 1939, the state was able to demonstrate its independence by remaining neutral throughout the war, although Dublin did to some extent tacitly support the Allies. Finally, in 1948, a coalition government, containing elements of both sides in the Civil War (pro-treaty Fine Gael and anti-treaty Clann na Poblachta), left the British Commonwealth and described the state as the Republic of Ireland. By the 1950s, the issues over which the Civil War had been fought were largely settled.
As with most civil wars, the internecine conflict left a bitter legacy, which continues to influence Irish politics to this day. The two largest political parties in the republic through most of its history (except for the 2011 and 2020 general elections) were Fianna Fáil and Fine Gael, the descendants respectively of the anti-treaty and pro-treaty forces of 1922. Until the 1970s, almost all of Ireland's prominent politicians were veterans of the Civil War, a fact which poisoned the relationship between Ireland's two biggest parties. Examples of Civil War veterans include: Republicans Éamon de Valera, Frank Aiken, Todd Andrews and Seán Lemass; and Free State supporters W. T. Cosgrave, Richard Mulcahy and Kevin O'Higgins.
Moreover, many of these men's sons and daughters also became politicians, meaning that the personal wounds of the civil war were felt over three generations. In the 1930s, after Fianna Fáil took power for the first time, it looked possible for a while that the Civil War might break out again between the IRA and the pro-Free State Blueshirts. The crisis was averted, and by the 1950s violence was no longer prominent in politics in the Republic of Ireland. However, the breakaway IRA continued (and continues in various forms) to exist. It was not until 1948 that the IRA renounced military attacks on the forces of the southern Irish state when it became the Republic of Ireland. After this point, the organisation dedicated itself primarily to the end of British rule in Northern Ireland. The IRA Army Council still claims to be the legitimate Provisional Government of the Irish Republic declared in 1916 and annulled by the Anglo-Irish Treaty of 1921.
According to Edward Quinn, the play "Juno and the Paycock" by Seán O'Casey is a tragicomedy that criticizes the civil war and the foolishness that led to it. Irish writer James Stephens says the play's theme is an "orchestrated hymn against all poverty and hate."
"title": "Course of the war"
},
{
"paragraph_id": 39,
"text": "The final phase of the Civil War degenerated into a series of atrocities that left a lasting legacy of bitterness in Irish politics. The Free State began executing Republican prisoners on 17 November 1922, when five IRA men were shot by firing squad. They were followed on 24 November by the execution of acclaimed author and treaty negotiator Erskine Childers. In all, out of around 12,000 Republican prisoners taken in the conflict, 81 were officially executed by the Free State.",
"title": "Course of the war"
},
{
"paragraph_id": 40,
"text": "The Anti-Treaty IRA in reprisal assassinated TD Seán Hales on 7 December 1922. The next day four prominent Republicans held since the first week of the war — Rory O'Connor, Liam Mellows, Richard Barrett and Joe McKelvey — were executed in revenge for the killing of Hales. In addition, Free State troops, particularly in County Kerry, where the guerrilla campaign was most bitter, began the summary execution of captured anti-treaty fighters. The most notorious example of this occurred at Ballyseedy, where nine Republican prisoners were tied to a landmine, which was detonated, killing eight and only leaving one, Stephen Fuller, who was blown clear by the blast, to escape.",
"title": "Course of the war"
},
{
"paragraph_id": 41,
"text": "The number of \"unauthorised\" executions of Republican prisoners during the war has been put as high as 153. Among the Republican reprisals were the assassination of Kevin O'Higgins's father and W. T. Cosgrave's uncle in February 1923.",
"title": "Course of the war"
},
{
"paragraph_id": 42,
"text": "The IRA were unable to maintain an effective guerrilla campaign, given the gradual loss of support. The Catholic Church also supported the Free State, deeming it the lawful government of the country, denouncing the IRA and refusing to administer the Sacraments to anti-treaty fighters. On 10 October 1922, the Catholic Bishops of Ireland issued a formal statement, describing the anti-treaty campaign as:",
"title": "Course of the war"
},
{
"paragraph_id": 43,
"text": "[A] system of murder and assassination of the National forces without any legitimate authority... the guerrilla warfare now being carried on [by] the Irregulars is without moral sanction and therefore the killing of National soldiers is murder before God, the seizing of public and private property is robbery, the breaking of roads, bridges and railways is criminal. All who in contravention of this teaching, participate in such crimes are guilty of grievous sins and may not be absolved in Confession nor admitted to the Holy Communion if they persist in such evil courses.",
"title": "Course of the war"
},
{
"paragraph_id": 44,
"text": "The Church's support for the Free State aroused bitter hostility among some republicans. Although the Catholic Church in independent Ireland has often been seen as a triumphalist Church, a recent study has found that it felt deeply insecure after these events.",
"title": "Course of the war"
},
{
"paragraph_id": 45,
"text": "By early 1923, the offensive capability of the IRA had been seriously eroded and when, in February 1923, the Republican leader Liam Deasy was captured by Free State forces, he called on the republicans to end their campaign and reach an accommodation with the Free State. The State's executions of anti-treaty prisoners, 34 of whom were shot in January 1923, also took its toll on the Republicans' morale.",
"title": "Course of the war"
},
{
"paragraph_id": 46,
"text": "In addition, the National Army's operations in the field were slowly but steadily breaking up the remaining Republican concentrations.",
"title": "Course of the war"
},
{
"paragraph_id": 47,
"text": "March and April 1923 saw this progressive dismemberment of the Republican forces continue with the capture and sometimes killing of guerrilla columns. A National Army report of 11 April stated, \"Events of the last few days point to the beginning of the end as a far as the irregular campaign is concerned\".",
"title": "Course of the war"
},
{
"paragraph_id": 48,
"text": "As the conflict petered out into a de facto victory for the pro-treaty side, de Valera asked the IRA leadership to call a ceasefire, but they refused. The Anti-Treaty IRA executive met on 26 March in County Waterford to discuss the war's future. Tom Barry proposed a motion to end the war, but it was defeated by 6 votes to 5. Éamon de Valera was allowed to attend, after some debate, but was given no voting rights.",
"title": "Course of the war"
},
{
"paragraph_id": 49,
"text": "Lynch, the Republican leader, was killed in a skirmish in the Knockmealdown Mountains in County Tipperary on 10 April. The National Army had extracted information from Republican prisoners in Dublin that the IRA Executive was in the area and as well as killing Lynch, they also captured senior anti-treaty IRA officers Dan Breen, Todd Andrews, Seán Gaynor and Frank Barrett in the operation.",
"title": "Course of the war"
},
{
"paragraph_id": 50,
"text": "It is often suggested by historians including Professor Michael Laffan of University College Dublin, that the death of Lynch allowed the more pragmatic Frank Aiken, who took over as IRA Chief of Staff, to call a halt to what seemed a futile struggle. Aiken's accession to IRA leadership was followed on 30 April by the declaration of a suspension of military activities; on 24 May 1923, he issued a ceasefire order to IRA volunteers. They were to dump arms rather than surrender them or continue a fight that they were incapable of winning.",
"title": "Course of the war"
},
{
"paragraph_id": 51,
"text": "Éamon de Valera supported the order, issuing a statement to Anti-Treaty fighters on 24 May:",
"title": "Aftermath of the ceasefire"
},
{
"paragraph_id": 52,
"text": "Soldiers of the Republic. Legion of the Rearguard: The Republic can no longer be defended successfully by your arms. Further sacrifice of life would now be in vain and the continuance of the struggle in arms unwise in the national interest and prejudicial to the future of our cause. Military victory must be allowed to rest for the moment with those who have destroyed the Republic.",
"title": "Aftermath of the ceasefire"
},
{
"paragraph_id": 53,
"text": "The Free State government had started peace negotiations in early May, which broke down. The High Court of Justice in Ireland ruled on 31 July 1923 that a state of war no longer existed, and consequently the internment of Republicans, permitted under common law only in wartime, was now illegal. Without a formal peace, holding 13,000 prisoners and worried that fighting could break out again at any time, the government enacted two Public Safety (Emergency Powers) Acts on 1 and 3 August 1923, to permit continued internment and other measures. Thousands of Anti-Treaty IRA members (including de Valera on 15 August) were arrested by the Free State forces in the weeks and months after the end of the war, when they had dumped their arms and returned home.",
"title": "Aftermath of the ceasefire"
},
{
"paragraph_id": 54,
"text": "A general election was held on 27 August 1923, which Cumann na nGaedheal, the pro-Free State party, won with about 40% of the first-preference vote. The Republicans, represented by Sinn Féin, won about 27% of the vote. Many of their candidates and supporters were still imprisoned before, during and after the election.",
"title": "Aftermath of the ceasefire"
},
{
"paragraph_id": 55,
"text": "In October 1923, around 8,000 of the 12,000 Republican prisoners in Free State gaols went on a hunger strike. The strike lasted for 41 days and met little success (among those who died were Denny Barry, Joseph Whitty and Andy O'Sullivan) see: 1923 Irish Hunger Strikes. However, most of the women prisoners were released shortly thereafter and the hunger strike helped concentrate the Republican movement on the prisoners and their associated organisations. In July, de Valera had recognised the Republican political interests lay with the prisoners and went so far as to say:",
"title": "Aftermath of the ceasefire"
},
{
"paragraph_id": 56,
"text": "The whole future of our cause and of the nation depends in my opinion upon the spirit of the prisoners in the camps and in the jails. You are the repositories of the NATIONAL FAITH AND WILL",
"title": "Aftermath of the ceasefire"
},
{
"paragraph_id": 57,
"text": "Although the cause of the Civil War was the Treaty, as the war developed the anti-treaty forces sought to identify their actions with the traditional Republican cause of the \"men of no property\" and the result was that large Anglo-Irish landowners and some less well-off Southern Unionists were attacked. A total of 192 \"stately homes\" of the old landed class and of Free State politicians were destroyed by anti-treaty forces during the war.",
"title": "Attacks on former Unionists"
},
{
"paragraph_id": 58,
"text": "The stated reason for such attacks was that some landowners had become Free State senators. In October 1922, a deputation of Southern Unionists met W. T. Cosgrave to offer their support to the Free State and some of them had received positions in the State's Upper house or Senate. Among the prominent senators whose homes were attacked were: Palmerstown House near Naas, which belonged to the Earl of Mayo, Moore Hall in Mayo, Horace Plunkett (who had helped to establish the rural co-operative schemes), and Senator Henry Guinness (which was unsuccessful). Also burned was Marlfield House in Clonmel, the home of Senator John Philip Bagwell, with its extensive library of historical documents. Bagwell was kidnapped and held in the Dublin Mountains, but later released when reprisals were threatened.",
"title": "Attacks on former Unionists"
},
{
"paragraph_id": 59,
"text": "However, in addition to their allegiance to the Free State, there were also other factors behind Republican animosity towards the old landed class. Many, but not all of these people, had supported the Crown forces during the War of Independence. This support was often largely moral, but sometimes it took the form of actively assisting the British in the conflict. Such attacks should have ended with the Truce of 11 July 1921, but they continued after the truce and escalated during the Civil War. In July 1922, Con Moloney, the IRA Adjutant General, ordered that unionist property should be seized to accommodate their men. The \"worst spell\" of attacks on former unionist property came in the early months of 1923, 37 \"big houses\" being burnt in January and February alone.",
"title": "Attacks on former Unionists"
},
{
"paragraph_id": 60,
"text": "Though the Land Purchase (Ireland) Act 1903 allowed tenants to buy land from their landlords, some small farmers, particularly in Mayo and Galway, simply occupied land belonging to political opponents during this period when the RIC had ceased to function. In 1919, senior Sinn Féin officials were sufficiently concerned at this unilateral action that they instituted Arbitration Courts to adjudicate disputes. Sometimes these attacks had sectarian overtones, although most IRA men made no distinction between Catholic and Protestant supporters of the Irish government.",
"title": "Attacks on former Unionists"
},
{
"paragraph_id": 61,
"text": "The IRA burnt an orphanage housing Protestant boys near Clifden, County Galway in June 1922, on the ground that it was \"pro-British\". The 60 orphans were taken to Devonport on board a Royal Navy destroyer.",
"title": "Attacks on former Unionists"
},
{
"paragraph_id": 62,
"text": "Controversy continues to this day about the extent of intimidation of Protestants at this time. Many left Ireland during and after the Civil War. Dr Andy Bielenberg of UCC considers that about 41,000 who were not linked to the former British administration left Southern Ireland (which became the Irish Free State) between 1919 and 1923. He has found that a \"high-water mark\" of this 41,000 left between 1921 and 1923. In all, from 1911 to 1926, the Protestant population of the 26 counties fell from some 10.4% of the total population to 7.4%.",
"title": "Attacks on former Unionists"
},
{
"paragraph_id": 63,
"text": "The Civil War attracted international attention which led to various groups expressing support and opposition to the anti-treaty side. The Communist Party of Great Britain in its journal The Communist wrote \"The proletarians of the IRA have the future of Ireland in their hands. If the Irish Labour Party would only dare! A mass movement of the Irish workers in alliance with the IRA could establish a Workers' Republic now\". They were also supported by the Communist International (Comintern) which on 3 January 1923 passed a resolution stating it \"sends fraternal greetings to the struggling Irish national revolutionaries and feels assured that they will soon tread the only path that leads to real freedom – the path of Communism. The CI will assist all efforts to organise the struggle to combat this terror and to help the Irish workers and peasants to victory.\"",
"title": "Foreign support"
},
{
"paragraph_id": 64,
"text": "The majority of Irish-Americans supported the treaty, including those in Clann na Gael and Friends of Irish Freedom. However anti-treaty republicans had control of what was left of Clann na Gael and the American Association for the Recognition of the Irish Republic so they supported the anti-treaty side during the war.",
"title": "Foreign support"
},
{
"paragraph_id": 65,
"text": "The Civil War, though short, was bloody. It cost the lives of many public figures, including Michael Collins, Cathal Brugha, Arthur Griffith and Liam Lynch. Both sides carried out brutal acts: the anti-treaty forces killed a TD and several other pro-Treaty politicians and burned many homes of senators and Free State supporters, while the government executed anti-treaty prisoners, officially and unofficially.",
"title": "Consequences"
},
{
"paragraph_id": 66,
"text": "Precise figures for the dead and wounded have yet to be calculated. The pro-treaty forces suffered between 800 and 1000 fatalities from all causes. It has been suggested that the anti-treaty forces' death toll was higher. but the Republican roll of honour, compiled in the 1920s lists 426 anti-Treaty IRA Volunteers killed between January 1922 and April 1924. The most recent county-by-county research suggests a death toll of just under 2,000. For total combatant and civilian deaths, a minimum of 1,500 and a maximum of 4,000 have been suggested, though the latter figure is now generally estimated to be too high.",
"title": "Consequences"
},
{
"paragraph_id": 67,
"text": "The Garda Síochána (new police force) was not involved in the war, which meant that it was well placed to develop into an unarmed and politically neutral police service after the war. It had been disarmed by the Government in order to win public confidence in June–September 1922 and in December 1922, the IRA issued a General Order not to fire on the Civil Guard. The Criminal Investigation Department, or CID, a 350-strong, armed, plain-clothed Police Corps that had been established during the conflict for the purposes of counter-insurgency, was disbanded in October 1923, shortly after the conflict's end.",
"title": "Consequences"
},
{
"paragraph_id": 68,
"text": "The economic costs of the war were also high. As their forces abandoned their fixed positions in July–August 1922, the Republicans burned many of the administrative buildings and businesses that they had been occupying. In addition, their subsequent guerrilla campaign caused much destruction, and the economy of the Free State suffered a hard blow in the earliest days of its existence, as a result. The material damage caused by the war to property in the Free State has been estimated to be in the region of £50 million in 1922. This is equivalent to about £2.1 billion, or €2.4 billion worth of damage in 2022 values.",
"title": "Consequences"
},
{
"paragraph_id": 69,
"text": "Particularly damaging to the Free State's economy was the systematic destruction of railway infrastructure and roads by the Republicans. In addition, the cost to the Free State of waging the war came to another £17 million (£718m or €883m in 2022 values). By September 1923, Deputy Hogan estimated the cost at £50 million. The new State ended 1923 with a budget deficit of over £4 million (£168m or €196m in 2022 values). This weakened financial situation meant that the new state could not pay its share of Imperial debt under the treaty. This adversely affected the boundary negotiations in 1924–25, in which the Free State government acquiesced that border with Northern Ireland would remain unchanged in exchange for forgiveness of the Imperial debt. Further, the state undertook to pay for damage caused to property between the truce of July 1921 and the end of the Civil War; W. T. Cosgrave told the Dáil:",
"title": "Consequences"
},
{
"paragraph_id": 70,
"text": "Every Deputy in this House is aware of the complaint which has been made that the measure of compensation for post-Truce damage compares unfavourably with the awards for damage suffered pre-Truce.",
"title": "Consequences"
},
{
"paragraph_id": 71,
"text": "The fact that the Irish Civil War was fought between Irish Nationalist factions meant that the sporadic conflict in Northern Ireland ended. Collins and Sir James Craig signed an agreement to end it on 30 March 1922, but, despite this, Collins covertly supplied arms to the Northern IRA until a week before his death in August 1922. Because of the Irish Civil War, Northern Ireland was able to consolidate its existence and the partition of Ireland was confirmed for the foreseeable future. The continuing war also confirmed the northern Unionists' existing stance against the ethos of all shades of nationalism. This might have led to open hostilities between North and South had the Irish Civil War not broken out. Indeed, the Ulster Special Constabulary (the \"B-Specials\") that had been established in 1920 (on the foundation of Northern Ireland) was expanded in 1922 rather than being demobilised.",
"title": "Consequences"
},
{
"paragraph_id": 72,
"text": "In the event, it was only well after their defeat in the Civil War that anti-treaty Irish Republicans seriously considered whether to take armed action against British rule in Northern Ireland (the first serious suggestion to do this came in the late 1930s). The northern units of the IRA largely supported the Free State side in the Civil War because of Collins's policies, and over 500 of them joined the new Free State's National Army.",
"title": "Consequences"
},
{
"paragraph_id": 73,
"text": "The cost of the war and the budget deficit it caused was a difficulty for the new Free State and affected the Boundary Commission negotiations of 1925, which were to determine the border with Northern Ireland. The Free State agreed to waive its claim to predominantly Nationalist areas in Northern Ireland and in return its agreed share of the Imperial debt under the 1921 Treaty was not paid.",
"title": "Consequences"
},
{
"paragraph_id": 74,
"text": "In 1926, having failed to persuade the majority of the Anti-Treaty IRA or the anti-treaty party of Sinn Féin to accept the new status quo as a basis for an evolving Republic, a large faction led by de Valera and Aiken left to resume constitutional politics and to found the Fianna Fáil party. Whereas Fianna Fáil was to become the dominant party in Irish politics, Sinn Féin became a small, isolated political party. The IRA, then much more numerous and influential than Sinn Féin, remained associated with Fianna Fáil (though not directly) until banned by de Valera in 1935.",
"title": "Consequences"
},
{
"paragraph_id": 75,
"text": "In 1927, Fianna Fáil members took the Oath of Allegiance and entered the Dáil, effectively recognising the legitimacy of the Free State. The Free State was already moving towards independence by this point. Under the Statute of Westminster 1931, the British Parliament gave up its right to legislate for members of the British Commonwealth. When elected to power in 1932, Fianna Fáil under de Valera set about dismantling what they considered to be objectionable features of the treaty, abolishing the Oath of Allegiance, removing the power of the Office of Governor General (British representative in Ireland) and abolishing the Senate, which was dominated by former Unionists and pro-treaty Nationalists. In 1937, they passed a new constitution, which made a President the head of state, did not mention any allegiance to the British monarch, and which included a territorial claim to Northern Ireland. The following year, Britain returned without conditions the seaports that it had kept under the terms of the treaty. When the Second World War broke out in 1939, the state was able to demonstrate its independence by remaining neutral throughout the war, although Dublin did to some extent tacitly support the Allies. Finally, in 1948, a coalition government, containing elements of both sides in the Civil War (pro-treaty Fine Gael and anti-treaty Clann na Poblachta) left the British Commonwealth and described the state as the Republic of Ireland. By the 1950s, the issues over which the Civil War had been fought were largely settled.",
"title": "Consequences"
},
{
"paragraph_id": 76,
"text": "As with most civil wars, the internecine conflict left a bitter legacy, which continues to influence Irish politics to this day. The two largest political parties in the republic through most of its history (except for the 2011 and 2020 general elections) were Fianna Fáil and Fine Gael, the descendants respectively of the anti-treaty and pro-treaty forces of 1922. Until the 1970s, almost all of Ireland's prominent politicians were veterans of the Civil War, a fact which poisoned the relationship between Ireland's two biggest parties. Examples of Civil War veterans include: Republicans Éamon de Valera, Frank Aiken, Todd Andrews and Seán Lemass; and Free State supporters W. T. Cosgrave, Richard Mulcahy and Kevin O'Higgins.",
"title": "Legacy and memory"
},
{
"paragraph_id": 77,
"text": "Moreover, many of these men's sons and daughters also became politicians, meaning that the personal wounds of the civil war were felt over three generations. In the 1930s, after Fianna Fáil took power for the first time, it looked possible for a while that the Civil War might break out again between the IRA and the pro-Free State Blueshirts. Fortunately, this crisis was averted, and by the 1950s violence was no longer prominent in politics in the Republic of Ireland. However, the breakaway IRA continued (and continues in various forms) to exist. It was not until 1948 that the IRA renounced military attacks on the forces of the southern Irish state when it became the Republic of Ireland. After this point, the organisation dedicated itself primarily to the end of British rule in Northern Ireland. The IRA Army Council still makes claim to be the legitimate Provisional Government of the Irish Republic declared in 1916 and annulled by the Anglo-Irish Treaty of 1921.",
"title": "Legacy and memory"
},
{
"paragraph_id": 78,
"text": "According to Edward Quinn, the play \"Juno and the Paycock\" by Seán O'Casey is a tragicomedy that criticizes the civil war and the foolishness that led to it. Irish writer James Stephens says the play's theme is an \"orchestrated hymn against all poverty and hate.\"",
"title": "Legacy and memory"
}
]
| The Irish Civil War was a conflict that followed the Irish War of Independence and accompanied the establishment of the Irish Free State, an entity independent from the United Kingdom but within the British Empire. The civil war was waged between the Provisional Government of Ireland and the anti-Treaty Irish Republican Army (1922–1969) (IRA) over the Anglo-Irish Treaty. The Provisional Government supported the terms of the treaty, while the anti-Treaty opposition saw it as a betrayal of the Irish Republic that had been proclaimed during the Easter Rising of 1916. Many of the combatants had fought together against the British in the Irish Republican Army (1919–1922) during the War of Independence, and had divided after that conflict ended and the treaty negotiations began. The Civil War was won by the pro-treaty National Army, who first secured Dublin by early July, then went on the offensive against the anti-Treaty strongholds of the south and west, especially the 'Munster Republic', successfully capturing all urban centres by late August. The guerrilla phase of the Irish Civil War lasted another 10 months, before the IRA leadership issued a "dump arms" order to all units, effectively ending the conflict. The National Army benefited from substantial quantities of weapons provided by the British government, particularly artillery and armoured cars. The conflict left Irish society divided and embittered for generations. Today, the three largest political parties in the Republic of Ireland, Fine Gael, Fianna Fáil, and Sinn Féin are direct descendants of the opposing sides of the war; Fine Gael from the supporters of the pro-Treaty side, Fianna Fáil the party formed from the bulk of the anti-Treaty side by Éamon de Valera, and Sinn Féin, descended from the rump anti-Treaty and irredentist republican party left behind by De Valera's supporters. | 2001-11-01T21:46:28Z | 2023-12-31T05:23:31Z | [
"Template:Lang-ga",
"Template:Sfn",
"Template:Cite web",
"Template:Pb",
"Template:ATIRA",
"Template:Ireland topics",
"Template:Use dmy dates",
"Template:Blockquote",
"Template:Main",
"Template:Dead link",
"Template:Short description",
"Template:Infobox military conflict",
"Template:Distinguish",
"Template:Reflist",
"Template:Cite book",
"Template:Cite journal",
"Template:Use Hiberno-English",
"Template:Disputed inline",
"Template:ISBN",
"Template:Commons category",
"Template:Authority control",
"Template:Cite thesis",
"Template:Spaced ndash",
"Template:See also",
"Template:Page needed",
"Template:Notelist",
"Template:Wikiquote"
]
| https://en.wikipedia.org/wiki/Irish_Civil_War |
15,215 | Internet Explorer | Internet Explorer (formerly Microsoft Internet Explorer and Windows Internet Explorer, commonly abbreviated as IE or MSIE) is a retired series of graphical web browsers developed by Microsoft that were used in the Windows line of operating systems. While IE has been discontinued on most Windows editions, it remains supported on certain editions of Windows, such as Windows 10 LTSB/LTSC. It was first released in 1995 as part of the add-on package Plus! for Windows 95. Later versions were available as free downloads or in service packs and included in the original equipment manufacturer (OEM) service releases of Windows 95 and later versions of Windows. Microsoft spent over US$100 million per year on Internet Explorer in the late 1990s, with over 1,000 people involved in the project by 1999. New feature development for the browser was discontinued in 2016, and support ended on June 15, 2022 for the Windows 10 Semi-Annual Channel (SAC), in favor of its successor, Microsoft Edge.
Internet Explorer was once the most widely used web browser, attaining a peak of 95% usage share by 2003. It has since fallen out of general use after retirement. This came after Microsoft used bundling to win the first browser war against Netscape, which was the dominant browser in the 1990s. Its usage share has since declined with the launches of Firefox (2004) and Google Chrome (2008) and with the growing popularity of mobile operating systems such as Android and iOS that do not support Internet Explorer. Microsoft Edge, IE's successor, first overtook Internet Explorer in terms of market share in November 2019. Versions of Internet Explorer for other operating systems have also been produced, including an Xbox 360 version called Internet Explorer for Xbox and for platforms Microsoft no longer supports: Internet Explorer for Mac and Internet Explorer for UNIX (Solaris and HP-UX), and an embedded OEM version called Pocket Internet Explorer, later rebranded Internet Explorer Mobile, made for Windows CE, Windows Phone, and, previously, based on Internet Explorer 7, for Windows Phone 7.
The browser has been scrutinized throughout its development for its use of third-party technology (such as the source code of Spyglass Mosaic, used without royalty in early versions) and security and privacy vulnerabilities, and the United States and the European Union have determined that the integration of Internet Explorer with Windows has been to the detriment of fair browser competition.
Internet Explorer 7 was supported on Windows Embedded Compact 2013 until October 10, 2023. The core of Internet Explorer 11 will continue being shipped and supported until at least 2029 as IE Mode, a feature of Microsoft Edge, enabling Edge to display web pages using Internet Explorer 11's Trident layout engine and other components. Through IE Mode, the underlying technology of Internet Explorer 11 partially exists on versions of Windows that do not support IE11 as a proper application, including newer versions of Windows 10, as well as Windows 11, Windows Server Insider Build 22463 and Windows Server Insider Build 25110.
The Internet Explorer project was started in the summer of 1994 by Thomas Reardon, who, according to former project lead Ben Slivka, used source code from Spyglass, Inc. Mosaic, which was an early commercial web browser with formal ties to the pioneering National Center for Supercomputing Applications (NCSA) Mosaic browser. In late 1994, Microsoft licensed Spyglass Mosaic for a quarterly fee plus a percentage of Microsoft's non-Windows revenues for the software. Although bearing a name like NCSA Mosaic, Spyglass Mosaic had used the NCSA Mosaic source code sparingly.
The first version, dubbed Microsoft Internet Explorer, was installed as part of the Internet Jumpstart Kit in the Microsoft Plus! pack for Windows 95. The Internet Explorer team began with about six people in early development. Internet Explorer 1.5 was released several months later for Windows NT and added support for basic table rendering. By including it free of charge with their operating system, they did not have to pay royalties to Spyglass Inc, resulting in a lawsuit and a US$8 million settlement on January 22, 1997.
Microsoft was sued by SyNet Inc. in 1996, for trademark infringement, claiming it owned the rights to the name "Internet Explorer." It ended with Microsoft paying $5 million to settle the lawsuit.
Internet Explorer 2 is the second major version of Internet Explorer, released on November 22, 1995, for Windows 95 and Windows NT, and on April 23, 1996, for Apple Macintosh and Windows 3.1.
Internet Explorer 3 is the third major version of Internet Explorer, released on August 13, 1996, for Microsoft Windows and on January 8, 1997, for Apple Mac OS.
Internet Explorer 4 is the fourth major version of Internet Explorer, released in September 1997 for Microsoft Windows, Mac OS, Solaris, and HP-UX. It was the first version of Internet Explorer to use the Trident web engine.
Internet Explorer 5 is the fifth major version of Internet Explorer, released on March 18, 1999, for Windows 3.1, Windows NT 3, Windows 95, Windows NT 4.0 SP3, Windows 98, Mac OS X (up to v5.2.3), Classic Mac OS (up to v5.1.7), Solaris and HP-UX (up to 5.01 SP1).
Internet Explorer 6 is the sixth major version of Internet Explorer, released on August 24, 2001, for Windows NT 4.0 SP6a, Windows 98, Windows 2000, Windows ME and as the default web browser for Windows XP and Windows Server 2003.
Internet Explorer 7 is the seventh major version of Internet Explorer, released on October 18, 2006, for Windows XP SP2, Windows Server 2003 SP1 and as the default web browser for Windows Vista, Windows Server 2008 and Windows Embedded POSReady 2009. IE7 introduces tabbed browsing.
Internet Explorer 8 is the eighth major version of Internet Explorer, released on March 19, 2009, for Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008 and as the default web browser for Windows 7 (later default was Internet Explorer 11) and Windows Server 2008 R2.
Internet Explorer 9 is the ninth major version of Internet Explorer, released on March 14, 2011, for Windows 7, Windows Server 2008 R2, Windows Vista Service Pack 2 and Windows Server 2008 SP2 with the Platform Update.
Internet Explorer 10 is the tenth major version of Internet Explorer, released on October 26, 2012, and is the default web browser for Windows 8 and Windows Server 2012. It became available for Windows 7 SP1 and Windows Server 2008 R2 SP1 in February 2013.
Internet Explorer 11 is featured in Windows 8.1, Windows Server 2012 R2 and Windows RT 8.1, which was released on October 17, 2013. It includes an incomplete mechanism for syncing tabs. It features a major update to its developer tools, enhanced scaling for high-DPI screens, HTML5 prerender and prefetch, hardware-accelerated JPEG decoding, closed captioning and HTML5 full screen, and it is the first version of Internet Explorer to support WebGL and Google's SPDY protocol (starting at v3). This version of IE has features dedicated to Windows 8.1, including cryptography (WebCrypto), adaptive bitrate streaming (Media Source Extensions) and Encrypted Media Extensions.
Internet Explorer 11 was made available for Windows 7 users to download on November 7, 2013, with Automatic Updates in the following weeks.
Internet Explorer 11's user agent string now identifies the agent as "Trident" (the underlying browser engine) instead of "MSIE." It also announces compatibility with Gecko (the browser engine of Firefox).
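To illustrate the practical effect of this change, here is a minimal, hedged TypeScript sketch of the user-agent sniffing pattern sites commonly used: IE 10 and earlier advertise an "MSIE" token, while IE 11 exposes only the "Trident" token with an "rv:" revision. The sample strings below are representative only and are not taken from this article.

```typescript
// Minimal sketch of user-agent classification for Internet Explorer.
// IE 10 and earlier advertise an "MSIE x.y" token, while IE 11 drops it and
// exposes only the Trident engine token plus an "rv:" revision, claiming
// Gecko compatibility. The sample strings below are representative only.

interface IEInfo {
  isIE: boolean;
  version?: number;
}

function detectIE(userAgent: string): IEInfo {
  const msie = /MSIE (\d+)/.exec(userAgent);
  if (msie) {
    return { isIE: true, version: parseInt(msie[1], 10) };
  }
  const trident = /Trident\/.*rv:(\d+)/.exec(userAgent);
  if (trident) {
    return { isIE: true, version: parseInt(trident[1], 10) };
  }
  return { isIE: false };
}

console.log(detectIE("Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko")); // IE 11
console.log(detectIE("Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)"));            // IE 8
console.log(detectIE("Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36"));              // not IE
```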
Microsoft claimed that Internet Explorer 11, running the WebKit SunSpider JavaScript Benchmark, was the fastest browser as of October 15, 2013.
Internet Explorer 11 was made available in April 2019 for Windows Server 2012 and Windows Embedded 8 Standard, the only editions based on Windows 8 that were still supported at the time.
Microsoft Edge was officially unveiled on January 21, 2015 as "Project Spartan." On April 29, 2015, Microsoft announced that Microsoft Edge would replace Internet Explorer as the default browser in Windows 10. However, Internet Explorer remained the default web browser on the Windows 10 Long Term Servicing Channel (LTSC) and on Windows Server until 2021, primarily for enterprise purposes.
Internet Explorer is still installed in Windows 10 to maintain compatibility with older websites and intranet sites that require ActiveX and other legacy web technologies. The browser's MSHTML rendering engine also remains for compatibility reasons.
Additionally, Microsoft Edge shipped with the "Internet Explorer mode" feature, which enables support for legacy internet applications. This is possible through use of the Trident MSHTML engine, the rendering code of Internet Explorer. Microsoft has committed to supporting Internet Explorer mode at least through 2029, with a one-year notice before it is discontinued.
With the release of Microsoft Edge, the development of new features for Internet Explorer ceased. Internet Explorer 11 was the final release, and Microsoft began the process of deprecating Internet Explorer. During this process, it will still be maintained as part of Microsoft's support policies.
Since January 12, 2016, only the latest version of Internet Explorer available for each version of Windows has been supported. At the time, nearly half of Internet Explorer users were using an unsupported version.
In February 2019, Microsoft Chief of Security Chris Jackson recommended that users stop using Internet Explorer as their default browser.
Various websites have dropped support for Internet Explorer. On June 1, 2020, the Internet Archive removed Internet Explorer from its list of supported browsers, due to the browser's dated nature. Since November 30, 2020, the web version of Microsoft Teams can no longer be accessed using Internet Explorer 11, followed by the remaining Microsoft 365 applications since August 17, 2021. WordPress also dropped support for the browser in July 2021.
Microsoft disabled the normal means of launching Internet Explorer in Windows 11 and later versions of Windows 10, but it is still possible for users to launch the browser from the Control Panel's browser toolbar settings or via PowerShell.
On June 15, 2022, Internet Explorer 11 support ended for the Windows 10 Semi-Annual Channel (SAC). Users on these versions of Windows 10 were redirected to Microsoft Edge starting on February 14, 2023, and visual references to the browser (such as icons on the taskbar) would have been removed on June 13, 2023. However, various organizations disapproved, leading Microsoft to withdraw the change on May 19, 2023. Other versions of Windows that were still supported at the time were unaffected. Specifically, Windows 7 ESU, Windows 8.x, Windows RT; Windows Server 2008/R2 ESU, Windows Server 2012/R2 and later; and Windows 10 LTSB/LTSC continued to receive updates until their respective end of life dates.
On other versions of Windows, Internet Explorer will still be supported until their own end of support dates. IE7 was supported until October 10, 2023 alongside the end of support for Windows Embedded Compact 2013, while IE9 will be supported until January 9, 2024 alongside the end of ESU support for Azure customers on Windows Server 2008. Barring additional changes to the support policy, Internet Explorer 11 will be supported until January 13, 2032, concurrent with the end of support for Windows 10 IoT Enterprise LTSC 2021.
Internet Explorer has been designed to view a broad range of web pages and provide certain features within the operating system, including Microsoft Update. During the height of the browser wars, Internet Explorer superseded Netscape only when it caught up technologically to support the progressive features of the time.
Internet Explorer, using the MSHTML (Trident) browser engine:
Internet Explorer uses DOCTYPE sniffing to choose between standards mode and a "quirks mode" in which it deliberately mimics nonstandard behaviors of old versions of MSIE for HTML and CSS rendering on screen (Internet Explorer always uses standards mode for printing). It also provides its own dialect of ECMAScript called JScript.
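A small, hedged TypeScript sketch of how a page script can observe which mode DOCTYPE sniffing selected: the DOM's document.compatMode property, which IE introduced and other browsers later adopted, reports "BackCompat" in quirks mode and "CSS1Compat" in standards mode.

```typescript
// Sketch: report whether the current document was put into quirks mode or
// standards mode by DOCTYPE sniffing. document.compatMode returns
// "BackCompat" in quirks mode and "CSS1Compat" in standards mode.

function renderingMode(doc: Document): "quirks" | "standards" {
  return doc.compatMode === "BackCompat" ? "quirks" : "standards";
}

// A page served without a DOCTYPE is typically rendered in quirks mode,
// while one beginning with <!DOCTYPE html> is rendered in standards mode.
console.log(`This document is in ${renderingMode(document)} mode.`);
```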
Internet Explorer was criticized by Tim Berners-Lee for its limited support for SVG, which is promoted by W3C.
Internet Explorer has introduced an array of proprietary extensions to many of the standards, including HTML, CSS, and the DOM. This has resulted in several web pages that appear broken in standards-compliant web browsers and has introduced the need for a "quirks mode" to allow for rendering improper elements meant for Internet Explorer in these other browsers.
Internet Explorer has introduced several extensions to the DOM that have been adopted by other browsers.
These include the innerHTML property, which provides access to the HTML string within an element (introduced in IE 5 and standardized as part of HTML5 roughly 15 years later, after all other browsers had implemented it for compatibility); the XMLHttpRequest object, which allows the sending of HTTP requests and receiving of HTTP responses, and may be used to perform AJAX; and the designMode attribute of the contentDocument object, which enables rich text editing of HTML documents. Some of these functionalities were not possible until the introduction of the W3C DOM methods. Its Ruby character extension to HTML is also accepted as a module in W3C XHTML 1.1, though it is not found in all versions of W3C HTML.
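As a concrete illustration of two of these IE-originated APIs, the following TypeScript sketch fetches an HTML fragment with XMLHttpRequest and injects it with innerHTML; the /fragment.html URL and #target selector are hypothetical placeholders.

```typescript
// Sketch: fetch an HTML fragment with XMLHttpRequest and inject it into the
// page via innerHTML, two of the IE-originated APIs described above.
// "/fragment.html" and "#target" are hypothetical placeholders.

function loadFragment(url: string, targetSelector: string): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url, true); // true = asynchronous
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) { // 4 = DONE
      const target = document.querySelector(targetSelector);
      if (target) {
        target.innerHTML = xhr.responseText;
      }
    }
  };
  xhr.send();
}

loadFragment("/fragment.html", "#target");

// The related designMode attribute turns the whole document into an
// editable surface: document.designMode = "on";
```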
Microsoft submitted several other features of IE for consideration by the W3C for standardization. These include the 'behavior' CSS property, which connects the HTML elements with JScript behaviors (known as HTML Components, HTC), HTML+TIME profile, which adds timing and media synchronization support to HTML documents (similar to the W3C XHTML+SMIL), and the VML vector graphics file format. However, all were rejected, at least in their original forms; VML was subsequently combined with PGML (proposed by Adobe and Sun), resulting in the W3C-approved SVG format, one of the few vector image formats being used on the web, which IE did not support until version 9.
Other non-standard behaviors include: support for vertical text, but in a syntax different from W3C CSS3 candidate recommendation, support for a variety of image effects and page transitions, which are not found in W3C CSS, support for obfuscated script code, in particular JScript.Encode, as well as support for embedding EOT fonts in web pages.
Support for favicons was first added in Internet Explorer 5. Internet Explorer supports favicons in PNG, static GIF and native Windows icon formats. In Windows Vista and later, Internet Explorer can display native Windows icons that have embedded PNG files.
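As a brief illustration (not from the article itself), a page can also point the browser at a favicon from script; the sketch below assumes the conventional /favicon.ico location, which is only a placeholder here, and a static link tag in the HTML head remains the more common approach.

```typescript
// Sketch: declare a favicon from script by appending a <link> element to the
// document head. "/favicon.ico" is only the conventional default location and
// is used here as a placeholder.

function setFavicon(href: string): void {
  const link = document.createElement("link");
  link.rel = "shortcut icon"; // the rel value older IE versions looked for
  link.type = "image/x-icon";
  link.href = href;
  document.head.appendChild(link);
}

setFavicon("/favicon.ico");
```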
Internet Explorer makes use of the accessibility framework provided in Windows. Internet Explorer is also a user interface for FTP, with operations similar to Windows Explorer. Internet Explorer 5 and 6 had a side bar for web searches, enabling jumps through pages from results listed in the side bar. Pop-up blocking and tabbed browsing were added respectively in Internet Explorer 6 and Internet Explorer 7. Tabbed browsing can also be added to older versions by installing MSN Search Toolbar or Yahoo Toolbar.
Internet Explorer caches visited content in the Temporary Internet Files folder to allow quicker access (or offline access) to previously visited pages. The content is indexed in a database file, known as Index.dat. Multiple Index.dat files exist which index different content—visited content, web feeds, visited URLs, cookies, etc.
Prior to IE7, clearing the cache cleared the index, but the files themselves were not reliably removed, posing a potential security and privacy risk. In IE7 and later, when the cache is cleared, the cache files are more reliably removed, and the index.dat file is overwritten with null bytes.
Caching has been improved in IE9.
Internet Explorer is fully configurable using Group Policy. Administrators of Windows Server domains (for domain-joined computers) or the local computer can apply and enforce a variety of settings on computers that affect the user interface (such as disabling menu items and individual configuration options), as well as underlying security features such as downloading of files, zone configuration, per-site settings, ActiveX control behavior and others. Policy settings can be configured for each user and for each machine. Internet Explorer also supports Integrated Windows Authentication.
Internet Explorer uses a componentized architecture built on the Component Object Model (COM) technology. It consists of several major components, each of which is contained in a separate dynamic-link library (DLL) and exposes a set of COM programming interfaces hosted by the Internet Explorer main executable, iexplore.exe:
Internet Explorer does not include any native scripting functionality. Rather, MSHTML.dll exposes an API that permits a programmer to develop a scripting environment to be plugged-in and to access the DOM tree. Internet Explorer 8 includes the bindings for the Active Scripting engine, which is a part of Microsoft Windows and allows any language implemented as an Active Scripting module to be used for client-side scripting. By default, only the JScript and VBScript modules are provided; third party implementations like ScreamingMonkey (for ECMAScript 4 support) can also be used. Microsoft also makes available the Microsoft Silverlight runtime that allows CLI languages, including DLR-based dynamic languages like IronPython and IronRuby, to be used for client-side scripting.
Internet Explorer 8 introduced some major architectural changes, called loosely coupled IE (LCIE). LCIE separates the main window process (frame process) from the processes hosting the different web applications in different tabs (tab processes). A frame process can create multiple tab processes, each of which can be of a different integrity level; each tab process can host multiple web sites. The processes use asynchronous inter-process communication to synchronize themselves. Generally, there will be a single frame process for all web sites. In Windows Vista with protected mode turned on, however, opening privileged content (such as local HTML pages) will create a new tab process as it will not be constrained by protected mode.
Internet Explorer exposes a set of Component Object Model (COM) interfaces that allows add-ons to extend the functionality of the browser. Extensibility is divided into two types: Browser extensibility and content extensibility. Browser extensibility involves adding context menu entries, toolbars, menu items or Browser Helper Objects (BHO). BHOs are used to extend the feature set of the browser, whereas the other extensibility options are used to expose that feature in the user interface. Content extensibility adds support for non-native content formats. It allows Internet Explorer to handle new file formats and new protocols, e.g. WebM or SPDY. In addition, web pages can integrate widgets known as ActiveX controls which run on Windows only but have vast potentials to extend the content capabilities; Adobe Flash Player and Microsoft Silverlight are examples. Add-ons can be installed either locally, or directly by a web site.
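A hedged TypeScript sketch of the feature-detection idiom pages historically used to tell whether IE's ActiveX extensibility was available to script: the ActiveXObject constructor exists only in Internet Explorer, so it is declared explicitly here, and the well-known MSXML ProgID "Msxml2.XMLHTTP" is used purely as a probe; treat the whole snippet as illustrative rather than as this article's own example.

```typescript
// Sketch: the classic feature-detection idiom for IE's ActiveX support.
// ActiveXObject exists only in Internet Explorer, so it is declared here
// rather than assumed by the standard DOM typings. "Msxml2.XMLHTTP" is a
// well-known MSXML ProgID used purely as a probe.

declare const ActiveXObject: (new (progId: string) => unknown) | undefined;

function activeXAvailable(): boolean {
  if (typeof ActiveXObject === "undefined") {
    return false; // not running inside Internet Explorer
  }
  try {
    // Instantiation throws if ActiveX controls are disabled for the zone.
    new ActiveXObject("Msxml2.XMLHTTP");
    return true;
  } catch {
    return false;
  }
}

console.log(activeXAvailable() ? "ActiveX available" : "ActiveX unavailable");
```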
Since malicious add-ons can compromise the security of a system, Internet Explorer implements several safeguards. Internet Explorer 6 with Service Pack 2 and later feature an Add-on Manager for enabling or disabling individual add-ons, complemented by a "No Add-Ons" mode. Starting with Windows Vista, Internet Explorer and its BHOs run with restricted privileges and are isolated from the rest of the system. Internet Explorer 9 introduced a new component – Add-on Performance Advisor. Add-on Performance Advisor shows a notification when one or more of installed add-ons exceed a pre-set performance threshold. The notification appears in the Notification Bar when the user launches the browser. Windows 8 and Windows RT introduce a Metro-style version of Internet Explorer that is entirely sandboxed and does not run add-ons at all. In addition, Windows RT cannot download or install ActiveX controls at all; although existing ones bundled with Windows RT still run in the traditional version of Internet Explorer.
Internet Explorer itself can be hosted by other applications via a set of COM interfaces. This can be used to embed the browser functionality inside a computer program or create Internet Explorer shells.
Internet Explorer uses a zone-based security framework that groups sites based on certain conditions, including whether it is an Internet- or intranet-based site as well as a user-editable whitelist. Security restrictions are applied per zone; all the sites in a zone are subject to the restrictions.
Internet Explorer 6 SP2 onwards uses the Attachment Execution Service of Microsoft Windows to mark executable files downloaded from the Internet as being potentially unsafe. Accessing files marked as such will prompt the user to make an explicit trust decision to execute the file, as executables originating from the Internet can be potentially unsafe. This helps in preventing the accidental installation of malware.
Internet Explorer 7 introduced the phishing filter, which restricts access to phishing sites unless the user overrides the decision. With version 8, it also blocks access to sites known to host malware. Downloads are also checked to see if they are known to be malware-infected.
In Windows Vista, Internet Explorer by default runs in what is called Protected Mode, where the privileges of the browser itself are severely restricted—it cannot make any system-wide changes. One can optionally turn this mode off, but this is not recommended. This also effectively restricts the privileges of any add-ons. As a result, even if the browser or any add-on is compromised, the damage the security breach can cause is limited.
Patches and updates to the browser are released periodically and made available through the Windows Update service, as well as through Automatic Updates. Although security patches continue to be released for a range of platforms, most feature additions and security infrastructure improvements are only made available on operating systems that are in Microsoft's mainstream support phase.
On December 16, 2008, Trend Micro recommended users switch to rival browsers until an emergency patch was released to fix a potential security risk which "could allow outside users to take control of a person's computer and steal their passwords.” Microsoft representatives countered this recommendation, claiming that "0.02% of internet sites" were affected by the flaw. A fix for the issue was released the following day with the Security Update for Internet Explorer KB960714, on Microsoft Windows Update.
In 2010, Germany's Federal Office for Information Security, known by its German initials, BSI, advised "temporary use of alternative browsers" because of a "critical security hole" in Microsoft's software that could allow hackers to remotely plant and run malicious code on Windows PCs.
In 2011, a report by Accuvant, funded by Google, rated the security (based on sandboxing) of Internet Explorer worse than Google Chrome but better than Mozilla Firefox.
A 2017 browser security white paper comparing Google Chrome, Microsoft Edge, and Internet Explorer 11 by X41 D-Sec in 2017 came to similar conclusions, also based on sandboxing and support of legacy web technologies.
Internet Explorer has been subjected to many security vulnerabilities and concerns such that the volume of criticism for IE is unusually high. Much of the spyware, adware, and computer viruses across the Internet are made possible by exploitable bugs and flaws in the security architecture of Internet Explorer, sometimes requiring nothing more than viewing of a malicious web page to install themselves. This is known as a "drive-by install.” There are also attempts to trick the user into installing malicious software by misrepresenting the software's true purpose in the description section of an ActiveX security alert.
A number of security flaws affecting IE originated not in the browser itself, but in ActiveX-based add-ons used by it. Because the add-ons have the same privilege as IE, the flaws can be as critical as browser flaws. This has led to the ActiveX-based architecture being criticized for being fault-prone. By 2005, some experts maintained that the dangers of ActiveX had been overstated and there were safeguards in place. In 2006, new techniques using automated testing found more than a hundred vulnerabilities in standard Microsoft ActiveX components. Security features introduced in Internet Explorer 7 mitigated some of these vulnerabilities.
In 2008, Internet Explorer had a number of published security vulnerabilities. According to research done by security research firm Secunia, Microsoft did not respond as quickly as its competitors in fixing security holes and making patches available. The firm also reported 366 vulnerabilities in ActiveX controls, an increase from the previous year.
According to an October 2010 report in The Register, researcher Chris Evans had detected a known security vulnerability which, dating back to 2008, had not been fixed for at least six hundred days. Microsoft said that it had known about this vulnerability but considered it of exceptionally low severity, as the victim web site must be configured in a peculiar way for this attack to be feasible at all.
In December 2010, researchers were able to bypass the "Protected Mode" feature in Internet Explorer.
In an advisory on January 14, 2010, Microsoft said that attackers targeting Google and other U.S. companies used software that exploits a security hole, which had already been patched, in Internet Explorer. The vulnerability affected Internet Explorer 6 on Windows XP and Server 2003, IE6 SP1 on Windows 2000 SP4, IE7 on Windows Vista, XP, Server 2008, and Server 2003, and IE8 on Windows 7, Vista, XP, Server 2003, and Server 2008 (R2).
The German government warned users against using Internet Explorer and recommended switching to an alternative web browser, due to the major security hole described above that was exploited in Internet Explorer. The Australian and French governments issued similar warnings a few days later.
On April 26, 2014, Microsoft issued a security advisory relating to CVE-2014-1776 (use-after-free vulnerability in Microsoft Internet Explorer 6 through 11), a vulnerability that could allow "remote code execution" in Internet Explorer versions 6 to 11. On April 28, 2014, the United States Department of Homeland Security's United States Computer Emergency Readiness Team (US-CERT) released an advisory stating that the vulnerability could result in "the complete compromise" of an affected system. US-CERT recommended reviewing Microsoft's suggestions to mitigate an attack or using an alternate browser until the bug is fixed. The UK National Computer Emergency Response Team (CERT-UK) published an advisory announcing similar concerns and for users to take the additional step of ensuring their antivirus software is up to date. Symantec, a cyber security firm, confirmed that "the vulnerability crashes Internet Explorer on Windows XP." The vulnerability was resolved on May 1, 2014, with a security update.
The adoption rate of Internet Explorer was closely related to that of Microsoft Windows, as it was the default web browser that came with Windows. After the integration of Internet Explorer 2.0 with Windows 95 OSR 1 in 1996, and especially after version 4.0's release in 1997, adoption accelerated greatly: from below 20% in 1996, to about 40% in 1998, and over 80% in 2000. This made Microsoft the winner in the first browser war against Netscape. Netscape Navigator was the dominant browser from 1995 until 1997, but it rapidly lost share to IE starting in 1998 and eventually slipped behind in 1999. The integration of IE with Windows led to a lawsuit by AOL, Netscape's owner, accusing Microsoft of unfair competition. AOL eventually won the case, but by then it was too late, as Internet Explorer had already become the dominant browser.
Internet Explorer peaked during 2002 and 2003, with about 95% share. Its first notable competitor after beating Netscape was Firefox from Mozilla, which itself was an offshoot from Netscape.
Firefox 1.0 surpassed Internet Explorer 5 in early 2005, reaching 8 percent market share.
Approximate usage over time is based on various usage share counters, averaged for the year overall, for the fourth quarter, or for the last month in the year, depending on the availability of references.
According to StatCounter, Internet Explorer's market share fell below 50% in September 2010. In May 2012, Google Chrome overtook Internet Explorer as the most used browser worldwide, according to StatCounter.
Browser Helper Objects are also used by many search engine companies and third parties to create add-ons that access their services, such as search engine toolbars. Because of the use of COM, it is possible to embed web-browsing functionality in third-party applications. Hence, there are several Internet Explorer shells, and several content-centric applications like RealPlayer also use Internet Explorer's web browsing module for viewing web pages within the applications.
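As a rough illustration of this COM-based reuse, the sketch below drives the same web-browsing component through the InternetExplorer.Application automation object. It is only a minimal example, assuming a Windows machine that still has Internet Explorer installed and the third-party pywin32 package available; the URL is a placeholder.

    import time
    import win32com.client  # pywin32, assumed to be installed

    # Create the COM automation object that exposes Internet Explorer's
    # web-browsing component (the same component reused by IE shells).
    ie = win32com.client.Dispatch("InternetExplorer.Application")
    ie.Visible = True                    # show the browser window
    ie.Navigate("https://example.com")   # placeholder URL

    # Poll the Busy property until the page has finished loading.
    while ie.Busy:
        time.sleep(0.5)

    print(ie.LocationURL)                # URL of the loaded page
    ie.Quit()                            # close this browser instance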
While a major upgrade of Internet Explorer can be uninstalled in a traditional way if the user has saved the original application files for installation, the matter of uninstalling the version of the browser that has shipped with an operating system remains a controversial one.
The idea of removing a stock install of Internet Explorer from a Windows system was proposed during the United States v. Microsoft Corp. case. One of Microsoft's arguments during the trial was that removing Internet Explorer from Windows may result in system instability. Indeed, programs that depend on libraries installed by IE, including the Windows help and support system, fail to function without IE. Before Windows Vista, it was not possible to run Windows Update without IE because the service used ActiveX technology, which no other web browser supported.
The popularity of Internet Explorer led to the appearance of malware abusing its name. On January 28, 2011, a fake Internet Explorer browser calling itself "Internet Explorer – Emergency Mode" appeared. It closely resembled the real Internet Explorer but had fewer buttons and no search bar. If a user attempted to launch any other browser such as Google Chrome, Mozilla Firefox, Opera, Safari, or the real Internet Explorer, this browser would be loaded instead. It also displayed a fake error message, claiming that the computer was infected with malware and Internet Explorer had entered "Emergency Mode." It blocked access to legitimate sites such as Google if the user tried to access them. | [
{
"paragraph_id": 0,
"text": "Internet Explorer (formerly Microsoft Internet Explorer and Windows Internet Explorer, commonly abbreviated as IE or MSIE) is a retired series of graphical web browsers developed by Microsoft that were used in the Windows line of operating systems. While IE has been discontinued on most Windows editions, it remains supported on certain editions of Windows, such as Windows 10 LTSB/LTSC. Starting in 1995, it was first released as part of the add-on package Plus! for Windows 95 that year. Later versions were available as free downloads or in-service packs and included in the original equipment manufacturer (OEM) service releases of Windows 95 and later versions of Windows. Microsoft spent over US$100 million per year on Internet Explorer in the late 1990s, with over 1,000 people involved in the project by 1999. New feature development for the browser was discontinued in 2016 and ended support on June 15, 2022 for Windows 10 Semi-Annual Channel (SAC), in favor of its successor, Microsoft Edge.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Internet Explorer was once the most widely used web browser, attaining a peak of 95% usage share by 2003. It has since fallen out of general use after retirement. This came after Microsoft used bundling to win the first browser war against Netscape, which was the dominant browser in the 1990s. Its usage share has since declined with the launches of Firefox (2004) and Google Chrome (2008) and with the growing popularity of mobile operating systems such as Android and iOS that do not support Internet Explorer. Microsoft Edge, IE's successor, first overtook Internet Explorer in terms of market share in November 2019. Versions of Internet Explorer for other operating systems have also been produced, including an Xbox 360 version called Internet Explorer for Xbox and for platforms Microsoft no longer supports: Internet Explorer for Mac and Internet Explorer for UNIX (Solaris and HP-UX), and an embedded OEM version called Pocket Internet Explorer, later rebranded Internet Explorer Mobile, made for Windows CE, Windows Phone, and, previously, based on Internet Explorer 7, for Windows Phone 7.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The browser has been scrutinized throughout its development for its use of third-party technology (such as the source code of Spyglass Mosaic, used without royalty in early versions) and security and privacy vulnerabilities, and the United States and the European Union have determined that the integration of Internet Explorer with Windows has been to the detriment of fair browser competition.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Internet Explorer 7 was supported on Windows Embedded Compact 2013 until October 10, 2023. The core of Internet Explorer 11 will continue being shipped and supported until at least 2029 as IE Mode, a feature of Microsoft Edge, enabling Edge to display web pages using Internet Explorer 11's Trident layout engine and other components. Through IE Mode, the underlying technology of Internet Explorer 11 partially exists on versions of Windows that do not support IE11 as a proper application, including newer versions of Windows 10, as well as Windows 11, Windows Server Insider Build 22463 and Windows Server Insider Build 25110.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The Internet Explorer project was started in the summer of 1994 by Thomas Reardon, who, according to former project lead Ben Slivka, used source code from Spyglass, Inc. Mosaic, which was an early commercial web browser with formal ties to the pioneering National Center for Supercomputing Applications (NCSA) Mosaic browser. In late 1994, Microsoft licensed Spyglass Mosaic for a quarterly fee plus a percentage of Microsoft's non-Windows revenues for the software. Although bearing a name like NCSA Mosaic, Spyglass Mosaic had used the NCSA Mosaic source code sparingly.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The first version, dubbed Microsoft Internet Explorer, was installed as part of the Internet Jumpstart Kit in the Microsoft Plus! pack for Windows 95. The Internet Explorer team began with about six people in early development. Internet Explorer 1.5 was released several months later for Windows NT and added support for basic table rendering. By including it free of charge with their operating system, they did not have to pay royalties to Spyglass Inc, resulting in a lawsuit and a US$8 million settlement on January 22, 1997.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Microsoft was sued by SyNet Inc. in 1996, for trademark infringement, claiming it owned the rights to the name \"Internet Explorer.\" It ended with Microsoft paying $5 million to settle the lawsuit.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Internet Explorer 2 is the second major version of Internet Explorer, released on November 22, 1995, for Windows 95 and Windows NT, and on April 23, 1996, for Apple Macintosh and Windows 3.1.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Internet Explorer 3 is the third major version of Internet Explorer, released on August 13, 1996, for Microsoft Windows and on January 8, 1997, for Apple Mac OS.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Internet Explorer 4 is the fourth major version of Internet Explorer, released in September 1997 for Microsoft Windows, Mac OS, Solaris, and HP-UX. It was the first version of Internet Explorer to use the Trident web engine.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Internet Explorer 5 is the fifth major version of Internet Explorer, released on March 18, 1999, for Windows 3.1, Windows NT 3, Windows 95, Windows NT 4.0 SP3, Windows 98, Mac OS X (up to v5.2.3), Classic Mac OS (up to v5.1.7), Solaris and HP-UX (up to 5.01 SP1).",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Internet Explorer 6 is the sixth major version of Internet Explorer, released on August 24, 2001, for Windows NT 4.0 SP6a, Windows 98, Windows 2000, Windows ME and as the default web browser for Windows XP and Windows Server 2003.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Internet Explorer 7 is the seventh major version of Internet Explorer, released on October 18, 2006, for Windows XP SP2, Windows Server 2003 SP1 and as the default web browser for Windows Vista, Windows Server 2008 and Windows Embedded POSReady 2009. IE7 introduces tabbed browsing.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Internet Explorer 8 is the eighth major version of Internet Explorer, released on March 19, 2009, for Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008 and as the default web browser for Windows 7 (later default was Internet Explorer 11) and Windows Server 2008 R2.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Internet Explorer 9 is the ninth major version of Internet Explorer, released on March 14, 2011, for Windows 7, Windows Server 2008 R2, Windows Vista Service Pack 2 and Windows Server 2008 SP2 with the Platform Update.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Internet Explorer 10 is the tenth major version of Internet Explorer, released on October 26, 2012, and is the default web browser for Windows 8 and Windows Server 2012. It became available for Windows 7 SP1 and Windows Server 2008 R2 SP1 in February 2013.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Internet Explorer 11 is featured in Windows 8.1, Windows Server 2012 R2 and Windows RT 8.1, which was released on October 17, 2013. It includes an incomplete mechanism for syncing tabs. It is a major update to its developer tools, enhanced scaling for high DPI screens, HTML5 prerender and prefetch, hardware-accelerated JPEG decoding, closed captioning, HTML5 full screen, and is the first Internet Explorer to support WebGL and Google's protocol SPDY (starting at v3). This version of IE has features dedicated to Windows 8.1, including cryptography (WebCrypto), adaptive bitrate streaming (Media Source Extensions) and Encrypted Media Extensions.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Internet Explorer 11 was made available for Windows 7 users to download on November 7, 2013, with Automatic Updates in the following weeks.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Internet Explorer 11's user agent string now identifies the agent as \"Trident\" (the underlying browser engine) instead of \"MSIE.\" It also announces compatibility with Gecko (the browser engine of Firefox).",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Microsoft claimed that Internet Explorer 11, running the WebKit SunSpider JavaScript Benchmark, was the fastest browser as of October 15, 2013.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Internet Explorer 11 was made available for Windows Server 2012 and Windows Embedded 8 Standard, the only still supported edition of Windows 8 in April 2019.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Microsoft Edge was officially unveiled on January 21, 2015 as \"Project Spartan.\" On April 29, 2015, Microsoft announced that Microsoft Edge would replace Internet Explorer as the default browser in Windows 10. However, Internet Explorer remained the default web browser on the Windows 10 Long Term Servicing Channel (LTSC) and on Windows Server until 2021, primarily for enterprise purposes.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Internet Explorer is still installed in Windows 10 to maintain compatibility with older websites and intranet sites that require ActiveX and other legacy web technologies. The browser's MSHTML rendering engine also remains for compatibility reasons.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Additionally, Microsoft Edge shipped with the \"Internet Explorer mode\" feature, which enables support for legacy internet applications. This is possible through use of the Trident MSHTML engine, the rendering code of Internet Explorer. Microsoft has committed to supporting Internet Explorer mode at least through 2029, with a one-year notice before it is discontinued.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "With the release of Microsoft Edge, the development of new features for Internet Explorer ceased. Internet Explorer 11 was the final release, and Microsoft began the process of deprecating Internet Explorer. During this process, it will still be maintained as part of Microsoft's support policies.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Since January 12, 2016, only the latest version of Internet Explorer available for each version of Windows has been supported. At the time, nearly half of Internet Explorer users were using an unsupported version.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "In February 2019, Microsoft Chief of Security Chris Jackson recommended that users stop using Internet Explorer as their default browser.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "Various websites have dropped support for Internet Explorer. On June 1, 2020, the Internet Archive removed Internet Explorer from its list of supported browsers, due to the browser's dated nature. Since November 30, 2020, the web version of Microsoft Teams can no longer be accessed using Internet Explorer 11, followed by the remaining Microsoft 365 applications since August 17, 2021. WordPress also dropped support for the browser in July 2021.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "Microsoft disabled the normal means of launching Internet Explorer in Windows 11 and later versions of Windows 10, but it is still possible for users to launch the browser from the Control Panel's browser toolbar settings or via PowerShell.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "On June 15, 2022, Internet Explorer 11 support ended for the Windows 10 Semi-Annual Channel (SAC). Users on these versions of Windows 10 were redirected to Microsoft Edge starting on February 14, 2023, and visual references to the browser (such as icons on the taskbar) would have been removed on June 13, 2023. However, on May 19, 2023 various organizations disapproved, leading Microsoft to withdraw the change. Other versions of Windows that were still supported at the time were unaffected. Specifically, Windows 7 ESU, Windows 8.x, Windows RT; Windows Server 2008/R2 ESU, Windows Server 2012/R2 and later; and Windows 10 LTSB/LTSC continued to receive updates until their respective end of life dates.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "On other versions of Windows, Internet Explorer will still be supported until their own end of support dates. IE7 was supported until October 10, 2023 alongside the end of support for Windows Embedded Compact 2013, while IE9 will be supported until January 9, 2024 alongside the end of ESU support for Azure customers on Windows Server 2008. Barring additional changes to the support policy, Internet Explorer 11 will be supported until January 13, 2032, concurrent with the end of support for Windows 10 IoT Enterprise LTSC 2021.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "Internet Explorer has been designed to view a broad range of web pages and provide certain features within the operating system, including Microsoft Update. During the height of the browser wars, Internet Explorer superseded Netscape only when it caught up technologically to support the progressive features of the time.",
"title": "Features"
},
{
"paragraph_id": 32,
"text": "Internet Explorer, using the MSHTML (Trident) browser engine:",
"title": "Features"
},
{
"paragraph_id": 33,
"text": "Internet Explorer uses DOCTYPE sniffing to choose between standards mode and a \"quirks mode\" in which it deliberately mimics nonstandard behaviors of old versions of MSIE for HTML and CSS rendering on screen (Internet Explorer always uses standards mode for printing). It also provides its own dialect of ECMAScript called JScript.",
"title": "Features"
},
{
"paragraph_id": 34,
"text": "Internet Explorer was criticized by Tim Berners-Lee for its limited support for SVG, which is promoted by W3C.",
"title": "Features"
},
{
"paragraph_id": 35,
"text": "Internet Explorer has introduced an array of proprietary extensions to many of the standards, including HTML, CSS, and the DOM. This has resulted in several web pages that appear broken in standards-compliant web browsers and has introduced the need for a \"quirks mode\" to allow for rendering improper elements meant for Internet Explorer in these other browsers.",
"title": "Features"
},
{
"paragraph_id": 36,
"text": "Internet Explorer has introduced several extensions to the DOM that have been adopted by other browsers.",
"title": "Features"
},
{
"paragraph_id": 37,
"text": "These include the inner HTML property, which provides access to the HTML string within an element, which was part of IE 5 and was standardized as part of HTML 5 roughly 15 years later after all other browsers implemented it for compatibility, the XMLHttpRequest object, which allows the sending of HTTP request and receiving of HTTP response, and may be used to perform AJAX, and the designMode attribute of the content Document object, which enables rich text editing of HTML documents. Some of these functionalities were not possible until the introduction of the W3C DOM methods. Its Ruby character extension to HTML is also accepted as a module in W3C XHTML 1.1, though it is not found in all versions of W3C HTML.",
"title": "Features"
},
{
"paragraph_id": 38,
"text": "Microsoft submitted several other features of IE for consideration by the W3C for standardization. These include the 'behavior' CSS property, which connects the HTML elements with JScript behaviors (known as HTML Components, HTC), HTML+TIME profile, which adds timing and media synchronization support to HTML documents (similar to the W3C XHTML+SMIL), and the VML vector graphics file format. However, all were rejected, at least in their original forms; VML was subsequently combined with PGML (proposed by Adobe and Sun), resulting in the W3C-approved SVG format, one of the few vector image formats being used on the web, which IE did not support until version 9.",
"title": "Features"
},
{
"paragraph_id": 39,
"text": "Other non-standard behaviors include: support for vertical text, but in a syntax different from W3C CSS3 candidate recommendation, support for a variety of image effects and page transitions, which are not found in W3C CSS, support for obfuscated script code, in particular JScript.Encode, as well as support for embedding EOT fonts in web pages.",
"title": "Features"
},
{
"paragraph_id": 40,
"text": "Support for favicons was first added in Internet Explorer 5. Internet Explorer supports favicons in PNG, static GIF and native Windows icon formats. In Windows Vista and later, Internet Explorer can display native Windows icons that have embedded PNG files.",
"title": "Features"
},
{
"paragraph_id": 41,
"text": "Internet Explorer makes use of the accessibility framework provided in Windows. Internet Explorer is also a user interface for FTP, with operations similar to Windows Explorer. Internet Explorer 5 and 6 had a side bar for web searches, enabling jumps through pages from results listed in the side bar. Pop-up blocking and tabbed browsing were added respectively in Internet Explorer 6 and Internet Explorer 7. Tabbed browsing can also be added to older versions by installing MSN Search Toolbar or Yahoo Toolbar.",
"title": "Features"
},
{
"paragraph_id": 42,
"text": "Internet Explorer caches visited content in the Temporary Internet Files folder to allow quicker access (or offline access) to previously visited pages. The content is indexed in a database file, known as Index.dat. Multiple Index.dat files exist which index different content—visited content, web feeds, visited URLs, cookies, etc.",
"title": "Features"
},
{
"paragraph_id": 43,
"text": "Prior to IE7, clearing the cache used to clear the index but the files themselves were not reliably removed, posing a potential security and privacy risk. In IE7 and later, when the cache is cleared, the cache files are more reliably removed, and the index.dat file is overwritten with null bytes.",
"title": "Features"
},
{
"paragraph_id": 44,
"text": "Caching has been improved in IE9.",
"title": "Features"
},
{
"paragraph_id": 45,
"text": "Internet Explorer is fully configurable using Group Policy. Administrators of Windows Server domains (for domain-joined computers) or the local computer can apply and enforce a variety of settings on computers that affect the user interface (such as disabling menu items and individual configuration options), as well as underlying security features such as downloading of files, zone configuration, per-site settings, ActiveX control behavior and others. Policy settings can be configured for each user and for each machine. Internet Explorer also supports Integrated Windows Authentication.",
"title": "Features"
},
{
"paragraph_id": 46,
"text": "Internet Explorer uses a componentized architecture built on the Component Object Model (COM) technology. It consists of several major components, each of which is contained in a separate dynamic-link library (DLL) and exposes a set of COM programming interfaces hosted by the Internet Explorer main executable, iexplore.exe:",
"title": "Architecture"
},
{
"paragraph_id": 47,
"text": "Internet Explorer does not include any native scripting functionality. Rather, MSHTML.dll exposes an API that permits a programmer to develop a scripting environment to be plugged-in and to access the DOM tree. Internet Explorer 8 includes the bindings for the Active Scripting engine, which is a part of Microsoft Windows and allows any language implemented as an Active Scripting module to be used for client-side scripting. By default, only the JScript and VBScript modules are provided; third party implementations like ScreamingMonkey (for ECMAScript 4 support) can also be used. Microsoft also makes available the Microsoft Silverlight runtime that allows CLI languages, including DLR-based dynamic languages like IronPython and IronRuby, to be used for client-side scripting.",
"title": "Architecture"
},
{
"paragraph_id": 48,
"text": "Internet Explorer 8 introduced some major architectural changes, called loosely coupled IE (LCIE). LCIE separates the main window process (frame process) from the processes hosting the different web applications in different tabs (tab processes). A frame process can create multiple tab processes, each of which can be of a different integrity level, each tab process can host multiple web sites. The processes use asynchronous inter-process communication to synchronize themselves. Generally, there will be a single frame process for all web sites. In Windows Vista with protected mode turned on, however, opening privileged content (such as local HTML pages) will create a new tab process as it will not be constrained by protected mode.",
"title": "Architecture"
},
{
"paragraph_id": 49,
"text": "Internet Explorer exposes a set of Component Object Model (COM) interfaces that allows add-ons to extend the functionality of the browser. Extensibility is divided into two types: Browser extensibility and content extensibility. Browser extensibility involves adding context menu entries, toolbars, menu items or Browser Helper Objects (BHO). BHOs are used to extend the feature set of the browser, whereas the other extensibility options are used to expose that feature in the user interface. Content extensibility adds support for non-native content formats. It allows Internet Explorer to handle new file formats and new protocols, e.g. WebM or SPDY. In addition, web pages can integrate widgets known as ActiveX controls which run on Windows only but have vast potentials to extend the content capabilities; Adobe Flash Player and Microsoft Silverlight are examples. Add-ons can be installed either locally, or directly by a web site.",
"title": "Extensibility"
},
{
"paragraph_id": 50,
"text": "Since malicious add-ons can compromise the security of a system, Internet Explorer implements several safeguards. Internet Explorer 6 with Service Pack 2 and later feature an Add-on Manager for enabling or disabling individual add-ons, complemented by a \"No Add-Ons\" mode. Starting with Windows Vista, Internet Explorer and its BHOs run with restricted privileges and are isolated from the rest of the system. Internet Explorer 9 introduced a new component – Add-on Performance Advisor. Add-on Performance Advisor shows a notification when one or more of installed add-ons exceed a pre-set performance threshold. The notification appears in the Notification Bar when the user launches the browser. Windows 8 and Windows RT introduce a Metro-style version of Internet Explorer that is entirely sandboxed and does not run add-ons at all. In addition, Windows RT cannot download or install ActiveX controls at all; although existing ones bundled with Windows RT still run in the traditional version of Internet Explorer.",
"title": "Extensibility"
},
{
"paragraph_id": 51,
"text": "Internet Explorer itself can be hosted by other applications via a set of COM interfaces. This can be used to embed the browser functionality inside a computer program or create Internet Explorer shells.",
"title": "Extensibility"
},
{
"paragraph_id": 52,
"text": "Internet Explorer uses a zone-based security framework that groups sites based on certain conditions, including whether it is an Internet- or intranet-based site as well as a user-editable whitelist. Security restrictions are applied per zone; all the sites in a zone are subject to the restrictions.",
"title": "Security"
},
{
"paragraph_id": 53,
"text": "Internet Explorer 6 SP2 onwards uses the Attachment Execution Service of Microsoft Windows to mark executable files downloaded from the Internet as being potentially unsafe. Accessing files marked as such will prompt the user to make an explicit trust decision to execute the file, as executables originating from the Internet can be potentially unsafe. This helps in preventing the accidental installation of malware.",
"title": "Security"
},
{
"paragraph_id": 54,
"text": "Internet Explorer 7 introduced the phishing filter, which restricts access to phishing sites unless the user overrides the decision. With version 8, it also blocks access to sites known to host malware. Downloads are also checked to see if they are known to be malware-infected.",
"title": "Security"
},
{
"paragraph_id": 55,
"text": "In Windows Vista, Internet Explorer by default runs in what is called Protected Mode, where the privileges of the browser itself are severely restricted—it cannot make any system-wide changes. One can optionally turn this mode off, but this is not recommended. This also effectively restricts the privileges of any add-ons. As a result, even if the browser or any add-on is compromised, the damage the security breach can cause is limited.",
"title": "Security"
},
{
"paragraph_id": 56,
"text": "Patches and updates to the browser are released periodically and made available through the Windows Update service, as well as through Automatic Updates. Although security patches continue to be released for a range of platforms, most feature additions and security infrastructure improvements are only made available on operating systems that are in Microsoft's mainstream support phase.",
"title": "Security"
},
{
"paragraph_id": 57,
"text": "On December 16, 2008, Trend Micro recommended users switch to rival browsers until an emergency patch was released to fix a potential security risk which \"could allow outside users to take control of a person's computer and steal their passwords.” Microsoft representatives countered this recommendation, claiming that \"0.02% of internet sites\" were affected by the flaw. A fix for the issue was released the following day with the Security Update for Internet Explorer KB960714, on Microsoft Windows Update.",
"title": "Security"
},
{
"paragraph_id": 58,
"text": "In 2010, Germany's Federal Office for Information Security, known by its German initials, BSI, advised \"temporary use of alternative browsers\" because of a \"critical security hole\" in Microsoft's software that could allow hackers to remotely plant and run malicious code on Windows PCs.",
"title": "Security"
},
{
"paragraph_id": 59,
"text": "In 2011, a report by Accuvant, funded by Google, rated the security (based on sandboxing) of Internet Explorer worse than Google Chrome but better than Mozilla Firefox.",
"title": "Security"
},
{
"paragraph_id": 60,
"text": "A 2017 browser security white paper comparing Google Chrome, Microsoft Edge, and Internet Explorer 11 by X41 D-Sec in 2017 came to similar conclusions, also based on sandboxing and support of legacy web technologies.",
"title": "Security"
},
{
"paragraph_id": 61,
"text": "Internet Explorer has been subjected to many security vulnerabilities and concerns such that the volume of criticism for IE is unusually high. Much of the spyware, adware, and computer viruses across the Internet are made possible by exploitable bugs and flaws in the security architecture of Internet Explorer, sometimes requiring nothing more than viewing of a malicious web page to install themselves. This is known as a \"drive-by install.” There are also attempts to trick the user into installing malicious software by misrepresenting the software's true purpose in the description section of an ActiveX security alert.",
"title": "Security"
},
{
"paragraph_id": 62,
"text": "A number of security flaws affecting IE originated not in the browser itself, but in ActiveX-based add-ons used by it. Because the add-ons have the same privilege as IE, the flaws can be as critical as browser flaws. This has led to the ActiveX-based architecture being criticized for being fault-prone. By 2005, some experts maintained that the dangers of ActiveX had been overstated and there were safeguards in place. In 2006, new techniques using automated testing found more than a hundred vulnerabilities in standard Microsoft ActiveX components. Security features introduced in Internet Explorer 7 mitigated some of these vulnerabilities.",
"title": "Security"
},
{
"paragraph_id": 63,
"text": "In 2008, Internet Explorer had a number of published security vulnerabilities. According to research done by security research firm Secunia, Microsoft did not respond as quickly as its competitors in fixing security holes and making patches available. The firm also reported 366 vulnerabilities in ActiveX controls, an increase from the previous year.",
"title": "Security"
},
{
"paragraph_id": 64,
"text": "According to an October 2010 report in The Register, researcher Chris Evans had detected a known security vulnerability which, then dating back to 2008, had not been fixed for at least six hundred days. Microsoft says that it had known about this vulnerability, but it was of exceptionally low severity as the victim web site must be configured in a peculiar way for this attack to be feasible at all.",
"title": "Security"
},
{
"paragraph_id": 65,
"text": "In December 2010, researchers were able to bypass the \"Protected Mode\" feature in Internet Explorer.",
"title": "Security"
},
{
"paragraph_id": 66,
"text": "In an advisory on January 14, 2010, Microsoft said that attackers targeting Google and other U.S. companies used software that exploits a security hole, which had already been patched, in Internet Explorer. The vulnerability affected Internet Explorer 6 from on Windows XP and Server 2003, IE6 SP1 on Windows 2000 SP4, IE7 on Windows Vista, XP, Server 2008, and Server 2003, IE8 on Windows 7, Vista, XP, Server 2003, and Server 2008 (R2).",
"title": "Security"
},
{
"paragraph_id": 67,
"text": "The German government warned users against using Internet Explorer and recommended switching to an alternative web browser, due to the major security hole described above that was exploited in Internet Explorer. The Australian and French Government issued a similar warning a few days later.",
"title": "Security"
},
{
"paragraph_id": 68,
"text": "On April 26, 2014, Microsoft issued a security advisory relating to CVE-2014-1776 (use-after-free vulnerability in Microsoft Internet Explorer 6 through 11), a vulnerability that could allow \"remote code execution\" in Internet Explorer versions 6 to 11. On April 28, 2014, the United States Department of Homeland Security's United States Computer Emergency Readiness Team (US-CERT) released an advisory stating that the vulnerability could result in \"the complete compromise\" of an affected system. US-CERT recommended reviewing Microsoft's suggestions to mitigate an attack or using an alternate browser until the bug is fixed. The UK National Computer Emergency Response Team (CERT-UK) published an advisory announcing similar concerns and for users to take the additional step of ensuring their antivirus software is up to date. Symantec, a cyber security firm, confirmed that \"the vulnerability crashes Internet Explorer on Windows XP.\" The vulnerability was resolved on May 1, 2014, with a security update.",
"title": "Security"
},
{
"paragraph_id": 69,
"text": "The adoption rate of Internet Explorer seems to be closely related to that of Microsoft Windows, as it is the default web browser that comes with Windows. Since the integration of Internet Explorer 2.0 with Windows 95 OSR 1 in 1996, and especially after version 4.0's release in 1997, the adoption was greatly accelerated: from below 20% in 1996, to about 40% in 1998, and over 80% in 2000. This made Microsoft the winner in the infamous 'first browser war' against Netscape. Netscape Navigator was the dominant browser during 1995 and until 1997, but rapidly lost share to IE starting in 1998, and eventually slipped behind in 1999. The integration of IE with Windows led to a lawsuit by AOL, Netscape's owner, accusing Microsoft of unfair competition. The infamous case was eventually won by AOL but by then it was too late, as Internet Explorer had already become the dominant browser.",
"title": "Market adoption and usage share"
},
{
"paragraph_id": 70,
"text": "Internet Explorer peaked during 2002 and 2003, with about 95% share. Its first notable competitor after beating Netscape was Firefox from Mozilla, which itself was an offshoot from Netscape.",
"title": "Market adoption and usage share"
},
{
"paragraph_id": 71,
"text": "Firefox 1.0 had surpassed Internet Explorer 5 in early 2005, with Firefox 1.0 at 8 percent market share.",
"title": "Market adoption and usage share"
},
{
"paragraph_id": 72,
"text": "Approximate usage over time based on various usage share counters averaged for the year overall, or for the fourth quarter, or for the last month in the year depending on availability of reference.",
"title": "Market adoption and usage share"
},
{
"paragraph_id": 73,
"text": "According to StatCounter, Internet Explorer's market share fell below 50% in September 2010. In May 2012, Google Chrome overtook Internet Explorer as the most used browser worldwide, according to StatCounter.",
"title": "Market adoption and usage share"
},
{
"paragraph_id": 74,
"text": "Browser Helper Objects are also used by many search engines companies and third parties for creating add-ons that access their services, such as search engine toolbars. Because of the use of COM, it is possible to embed web-browsing functionality in third-party applications. Hence, there are several Internet Explorer shells, and several content-centric applications like RealPlayer also use Internet Explorer's web browsing module for viewing web pages within the applications.",
"title": "Market adoption and usage share"
},
{
"paragraph_id": 75,
"text": "While a major upgrade of Internet Explorer can be uninstalled in a traditional way if the user has saved the original application files for installation, the matter of uninstalling the version of the browser that has shipped with an operating system remains a controversial one.",
"title": "Removal"
},
{
"paragraph_id": 76,
"text": "The idea of removing a stock install of Internet Explorer from a Windows system was proposed during the United States v. Microsoft Corp. case. One of Microsoft's arguments during the trial was that removing Internet Explorer from Windows may result in system instability. Indeed, programs that depend on libraries installed by IE, including Windows help and support system, fail to function without IE. Before Windows Vista, it was not possible to run Windows Update without IE because the service used ActiveX technology, which no other web browser supports.",
"title": "Removal"
},
{
"paragraph_id": 77,
"text": "The popularity of Internet Explorer led to the appearance of malware abusing its name. On January 28, 2011, a fake Internet Explorer browser calling itself \"Internet Explorer – Emergency Mode\" appeared. It closely resembled the real Internet Explorer but had fewer buttons and no search bar. If a user attempted to launch any other browser such as Google Chrome, Mozilla Firefox, Opera, Safari, or the real Internet Explorer, this browser would be loaded instead. It also displayed a fake error message, claiming that the computer was infected with malware and Internet Explorer had entered \"Emergency Mode.” It blocked access to legitimate sites such as Google if the user tried to access them.",
"title": "Impersonation by malware"
}
]
| Internet Explorer is a retired series of graphical web browsers developed by Microsoft that were used in the Windows line of operating systems. While IE has been discontinued on most Windows editions, it remains supported on certain editions of Windows, such as Windows 10 LTSB/LTSC. Starting in 1995, it was first released as part of the add-on package Plus! for Windows 95 that year. Later versions were available as free downloads or in-service packs and included in the original equipment manufacturer (OEM) service releases of Windows 95 and later versions of Windows. Microsoft spent over US$100 million per year on Internet Explorer in the late 1990s, with over 1,000 people involved in the project by 1999. New feature development for the browser was discontinued in 2016 and ended support on June 15, 2022 for Windows 10 Semi-Annual Channel (SAC), in favor of its successor, Microsoft Edge. Internet Explorer was once the most widely used web browser, attaining a peak of 95% usage share by 2003. It has since fallen out of general use after retirement. This came after Microsoft used bundling to win the first browser war against Netscape, which was the dominant browser in the 1990s. Its usage share has since declined with the launches of Firefox (2004) and Google Chrome (2008) and with the growing popularity of mobile operating systems such as Android and iOS that do not support Internet Explorer. Microsoft Edge, IE's successor, first overtook Internet Explorer in terms of market share in November 2019. Versions of Internet Explorer for other operating systems have also been produced, including an Xbox 360 version called Internet Explorer for Xbox and for platforms Microsoft no longer supports: Internet Explorer for Mac and Internet Explorer for UNIX, and an embedded OEM version called Pocket Internet Explorer, later rebranded Internet Explorer Mobile, made for Windows CE, Windows Phone, and, previously, based on Internet Explorer 7, for Windows Phone 7. The browser has been scrutinized throughout its development for its use of third-party technology and security and privacy vulnerabilities, and the United States and the European Union have determined that the integration of Internet Explorer with Windows has been to the detriment of fair browser competition. Internet Explorer 7 was supported on Windows Embedded Compact 2013 until October 10, 2023. The core of Internet Explorer 11 will continue being shipped and supported until at least 2029 as IE Mode, a feature of Microsoft Edge, enabling Edge to display web pages using Internet Explorer 11's Trident layout engine and other components. Through IE Mode, the underlying technology of Internet Explorer 11 partially exists on versions of Windows that do not support IE11 as a proper application, including newer versions of Windows 10, as well as Windows 11, Windows Server Insider Build 22463 and Windows Server Insider Build 25110. | 2001-11-02T05:39:28Z | 2023-12-31T23:07:35Z | [
"Template:Short description",
"Template:Use mdy dates",
"Template:Countries by most used web browser",
"Template:For",
"Template:Efn",
"Template:USD",
"Template:See also",
"Template:Wikibooks",
"Template:Refend",
"Template:Web browsers",
"Template:Timeline of web browsers",
"Template:CVE",
"Template:Portal",
"Template:Cite news",
"Template:Internet Explorer",
"Template:Microsoft Windows components",
"Template:Use American English",
"Template:Notelist",
"Template:Cite web",
"Template:Official website",
"Template:Better source needed",
"Template:Main",
"Template:Authority control",
"Template:Commons category",
"Template:Gopher clients",
"Template:Infobox software",
"Template:Citation needed",
"Template:Samp",
"Template:Reflist",
"Template:Cbignore",
"Template:Refbegin",
"Template:Aggregators"
]
| https://en.wikipedia.org/wiki/Internet_Explorer |
15,220 | Imprecise language | Imprecise language, informal spoken language, or everyday language is less precise than any more formal or academic languages.
Language might be said to be imprecise because it exhibits one or more of the following features:
While imprecise language is not desirable in various scientific fields, it may be helpful, illustrative or discussion-stimulative in other contexts. Imprecision in a discourse may or may not be the intention of the author(s) or speaker(s). The role of imprecision may depend on audience, end goal, extended context and subject matter. Relevant players and real stakes will also bear on truth-grounds of statements. | [
{
"paragraph_id": 0,
"text": "Imprecise language, informal spoken language, or everyday language is less precise than any more formal or academic languages.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Language might be said to be imprecise because it exhibits one or more of the following features:",
"title": ""
},
{
"paragraph_id": 2,
"text": "While imprecise language is not desirable in various scientific fields, it may be helpful, illustrative or discussion-stimulative in other contexts. Imprecision in a discourse may or may not be the intention of the author(s) or speaker(s). The role of imprecision may depend on audience, end goal, extended context and subject matter. Relevant players and real stakes will also bear on truth-grounds of statements.",
"title": ""
}
]
| Imprecise language, informal spoken language, or everyday language is less precise than any more formal or academic languages. Language might be said to be imprecise because it exhibits one or more of the following features: ambiguity – when a word or phrase has more than one meaning in the language to which it belongs.
vagueness – when borderline cases interfere with an interpretation.
equivocation – the misleading use of a term with more than one meaning or sense.
accent – when the use of bold or italics causes confusion over the meaning of a statement.
amphiboly – when a sentence may be interpreted in more than one way due to ambiguous sentence structure. While imprecise language is not desirable in various scientific fields, it may be helpful, illustrative or discussion-stimulative in other contexts. Imprecision in a discourse may or may not be the intention of the author(s) or speaker(s). The role of imprecision may depend on audience, end goal, extended context and subject matter. Relevant players and real stakes will also bear on truth-grounds of statements. | 2023-03-21T01:24:41Z | [
"Template:Cite web",
"Template:Cite journal",
"Template:Short description",
"Template:Multiple issues",
"Template:Citation needed",
"Template:Reflist",
"Template:Cite book"
]
| https://en.wikipedia.org/wiki/Imprecise_language |
|
15,221 | Intel 80188 | The Intel 80188 microprocessor was a variant of the Intel 80186. The 80188 had an 8-bit external data bus instead of the 16-bit bus of the 80186; this made it less expensive to connect to peripherals. The 16-bit registers and the one megabyte address range were unchanged, however. It had a throughput of 1 million instructions per second. Intel second sourced this microprocessor to Fujitsu Limited around 1985. The Intel 80188 was available as samples in both 68-pin PLCC and PGA packages in the third quarter of 1985. The 80C188EB was available as a fully static design for application-specific standard products, built using the 1-micron CHMOS IV technology. It came in 3-volt and 5-volt versions in 84-lead PLCC and 80-lead EIAJ QFP packages, and was also available for US$15.15 in 1,000-unit quantities.
The 80188 series was generally intended for embedded systems, as microcontrollers with external memory. Therefore, to reduce the number of chips required, it included features such as clock generator, interrupt controller, timers, wait state generator, DMA channels, and external chip select lines. While the N80188 was compatible with the 8087 numeric co-processor, the 80C188 was not. It did not have the ESC control codes integrated.
The initial clock rate of the 80188 was 6 MHz, but due to more hardware available for the microcode to use, especially for address calculation, many individual instructions ran faster than on an 8086 at the same clock frequency. For instance, the common register+immediate addressing mode was significantly faster than on the 8086, especially when a memory location was both (one of the) operand(s) and the destination. Multiply and divide also showed great improvement, being several times as fast as on the original 8086 and multi-bit shifts were done almost four times as quickly as in the 8086.
Along with hundreds of other processor models, Intel discontinued the 80188 processor 30 March 2006, after a life of about 24 years. | [
{
"paragraph_id": 0,
"text": "The Intel 80188 microprocessor was a variant of the Intel 80186. The 80188 had an 8-bit external data bus instead of the 16-bit bus of the 80186; this made it less expensive to connect to peripherals. The 16-bit registers and the one megabyte address range were unchanged, however. It had a throughput of 1 million instructions per second. Intel second sourced this microprocessor to Fujitsu Limited around 1985. Both packages of Intel 80188 version were available in 68-pin PLCC and PGA in sampling at third quarter of 1985. The available 80C188EB in fully static design for the application-specific standard product using the 1-micron CHMOS IV technology. They were available in 3- and 5-Volts version with 84-lead PLCC and 80-lead EIAJ QFP version. It was also available for USD $15.15 in 1,000 unit quantities.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The 80188 series was generally intended for embedded systems, as microcontrollers with external memory. Therefore, to reduce the number of chips required, it included features such as clock generator, interrupt controller, timers, wait state generator, DMA channels, and external chip select lines. While the N80188 was compatible with the 8087 numeric co-processor, the 80C188 was not. It did not have the ESC control codes integrated.",
"title": "Description"
},
{
"paragraph_id": 2,
"text": "The initial clock rate of the 80188 was 6 MHz, but due to more hardware available for the microcode to use, especially for address calculation, many individual instructions ran faster than on an 8086 at the same clock frequency. For instance, the common register+immediate addressing mode was significantly faster than on the 8086, especially when a memory location was both (one of the) operand(s) and the destination. Multiply and divide also showed great improvement, being several times as fast as on the original 8086 and multi-bit shifts were done almost four times as quickly as in the 8086.",
"title": "Description"
},
{
"paragraph_id": 3,
"text": "Along with hundreds of other processor models, Intel discontinued the 80188 processor 30 March 2006, after a life of about 24 years.",
"title": "Description"
}
]
| The Intel 80188 microprocessor was a variant of the Intel 80186. The 80188 had an 8-bit external data bus instead of the 16-bit bus of the 80186; this made it less expensive to connect to peripherals. The 16-bit registers and the one megabyte address range were unchanged, however. It had a throughput of 1 million instructions per second. Intel second sourced this microprocessor to Fujitsu Limited around 1985. The Intel 80188 was available as samples in both 68-pin PLCC and PGA packages in the third quarter of 1985. The 80C188EB was available as a fully static design for application-specific standard products, built using the 1-micron CHMOS IV technology. It came in 3-volt and 5-volt versions in 84-lead PLCC and 80-lead EIAJ QFP packages, and was also available for US$15.15 in 1,000-unit quantities. | 2001-11-05T16:40:29Z | 2023-09-13T20:26:50Z | [
"Template:Intel controllers",
"Template:Merge to",
"Template:Infobox CPU",
"Template:Efn",
"Template:Noteslist",
"Template:Reflist",
"Template:Cite web",
"Template:Intel processors"
]
| https://en.wikipedia.org/wiki/Intel_80188 |
15,222 | IEEE 802.2 | IEEE 802.2 is the original name of the ISO/IEC 8802-2 standard which defines logical link control (LLC) as the upper portion of the data link layer of the OSI Model. The original standard developed by the Institute of Electrical and Electronics Engineers (IEEE) in collaboration with the American National Standards Institute (ANSI) was adopted by the International Organization for Standardization (ISO) in 1998, but it remains an integral part of the family of IEEE 802 standards for local and metropolitan networks.
LLC is a software component that provides a uniform interface to the user of the data link service, usually the network layer. LLC may offer three types of services:
Conversely, the LLC uses the services of the media access control (MAC), which is dependent on the specific transmission medium (Ethernet, Token Ring, FDDI, 802.11, etc.). Using LLC is compulsory for all IEEE 802 networks with the exception of Ethernet. It is also used in Fiber Distributed Data Interface (FDDI) which is not part of the IEEE 802 family.
The IEEE 802.2 sublayer adds some control information to the message created by the upper layer and passed to the LLC for transmission to another node on the same data link. The resulting packet is generally referred to as an LLC protocol data unit (PDU), and the additional information added by the LLC sublayer is the LLC header. The LLC header consists of the DSAP (Destination Service Access Point), the SSAP (Source Service Access Point) and the Control field.
The two 8-bit fields DSAP and SSAP allow multiplexing of various upper layer protocols above LLC. However, many protocols use the Subnetwork Access Protocol (SNAP) extension which allows using EtherType values to specify the protocol being transported atop IEEE 802.2. It also allows vendors to define their own protocol value spaces.
The 8 or 16 bit HDLC-style Control field serves to distinguish communication mode, to specify a specific operation and to facilitate connection control and flow control (in connection mode) or acknowledgements (in acknowledged connectionless mode).
IEEE 802.2 provides two connectionless and one connection-oriented operational modes:
The use of multicasts and broadcasts reduces network traffic when the same information needs to be propagated to all stations of the network. However the Type 1 service provides no guarantees regarding the order of the received frames compared to the order in which they have been sent; the sender does not even get an acknowledgment that the frames have been received.
Each device conforming to the IEEE 802.2 standard must support service type 1. Each network node is assigned an LLC Class according to which service types it supports:
Any 802.2 LLC PDU has the following format:
When Subnetwork Access Protocol (SNAP) extension is used, it is located at the start of the Information field:
The 802.2 header includes two eight-bit address fields, called service access points (SAP) or collectively LSAP in the OSI terminology:
Although the LSAP fields are 8 bits long, the low-order bit is reserved for special purposes, leaving only 128 values available for most purposes.
The low-order bit of the DSAP indicates whether it contains an individual or a group address:
The low-order bit of the SSAP indicates whether the packet is a command or response packet:
The remaining 7 bits of the SSAP specify the LSAP (always an individual address) from which the packet was transmitted.
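To make the header layout concrete, the following minimal Python sketch decodes the fixed part of an LLC PDU using the field descriptions in this article (DSAP, SSAP, control field, and the individual/group and command/response bits); the function name and the sample frame are chosen here purely for illustration and are not part of the standard.

    def parse_llc_header(frame: bytes) -> dict:
        """Decode the start of an IEEE 802.2 LLC PDU (illustrative sketch)."""
        dsap, ssap, control = frame[0], frame[1], frame[2]
        pdu = {
            "dsap": dsap & 0xFE,                 # 7-bit destination LSAP
            "dsap_is_group": bool(dsap & 0x01),  # low-order bit: 0 = individual, 1 = group
            "ssap": ssap & 0xFE,                 # 7-bit source LSAP
            "is_response": bool(ssap & 0x01),    # low-order bit: 0 = command, 1 = response
        }
        if (control & 0x03) == 0x03:
            pdu["format"] = "U"                  # U-format: one-byte control field
            pdu["information"] = frame[3:]
        else:
            pdu["format"] = "I" if (control & 0x01) == 0 else "S"
            pdu["information"] = frame[4:]       # I- and S-format use a two-byte control field
        return pdu

    # Example: a UI frame addressed to the SNAP LSAP (0xAA) on both ends.
    print(parse_llc_header(bytes.fromhex("aaaa03000000080045")))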
LSAP numbers are globally assigned by the IEEE to uniquely identify well established international standards.
The protocols or families of protocols which have been assigned one or more SAPs may operate directly on top of 802.2 LLC. Other protocols may use the Subnetwork Access Protocol (SNAP) with IEEE 802.2, which is indicated by the hexadecimal value 0xAA (or 0xAB, if the source of a response) in SSAP and DSAP. The SNAP extension allows using EtherType values or private protocol ID spaces in all IEEE 802 networks. It can be used both in datagram and in connection-oriented network services.
Ethernet (IEEE 802.3) networks are an exception; the IEEE 802.3x-1997 standard explicitly allowed the use of Ethernet II framing, where the 16-bit field after the MAC addresses does not carry the length of the frame followed by the IEEE 802.2 LLC header, but the EtherType value followed by the upper layer data. With this framing, only datagram services are supported on the data link layer.
Although IPv4 has been assigned an LSAP value of 6 (0x06) and ARP has been assigned an LSAP value of 152 (0x98), IPv4 is almost never directly encapsulated in 802.2 LLC frames without SNAP headers. Instead, the Internet standard RFC 1042 is usually used for encapsulating IPv4 traffic in 802.2 LLC frames with SNAP headers on FDDI and on IEEE 802 networks other than Ethernet. Ethernet networks typically use Ethernet II framing with EtherType 0x800 for IP and 0x806 for ARP.
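Continuing the sketch above, a hypothetical helper can recognize this RFC 1042 style encapsulation by checking for the SNAP LSAP values, a UI control byte, a zero OUI, and the EtherType; the constant names below are illustrative only.

    RFC1042_LLC = bytes([0xAA, 0xAA, 0x03])   # DSAP = SSAP = SNAP, UI control byte
    ZERO_OUI = bytes([0x00, 0x00, 0x00])      # OUI used for EtherType-based SNAP
    ETHERTYPE_IPV4 = 0x0800
    ETHERTYPE_ARP = 0x0806

    def snap_ethertype(llc_pdu: bytes):
        """Return the EtherType of an RFC 1042 SNAP-encapsulated frame, else None."""
        if llc_pdu[:3] != RFC1042_LLC or llc_pdu[3:6] != ZERO_OUI:
            return None
        return int.from_bytes(llc_pdu[6:8], "big")

    sample = bytes.fromhex("aaaa03000000080045")   # same sample frame as above
    assert snap_ethertype(sample) == ETHERTYPE_IPV4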
The IPX protocol used by Novell NetWare networks supports an additional Ethernet frame type, 802.3 raw, ultimately supporting four frame types on Ethernet (802.3 raw, 802.2 LLC, 802.2 SNAP, and Ethernet II) and two frame types on FDDI and other (non-Ethernet) IEEE 802 networks (802.2 LLC and 802.2 SNAP).
It is possible to use diverse framings on a single network. It is possible to do it even for the same upper layer protocol, but in such a case the nodes using unlike framings cannot directly communicate with each other.
Following the destination and source SAP fields is a control field. IEEE 802.2 was conceptually derived from HDLC, and has the same three types of PDUs:
To carry data in the most often used unacknowledged connectionless mode, the U-format is used. It is identified by the value '11' in the lower two bits of the single-byte control field. | [
15,223 | Invertebrate | Invertebrates is an umbrella term describing animals that neither develop nor retain a vertebral column (commonly known as a spine or backbone), which evolved from the notochord. It is a paraphyletic grouping including all animals excluding the chordate subphylum Vertebrata, i.e. vertebrates. Well-known phyla of invertebrates include arthropods, mollusks, annelids, echinoderms, flatworms, cnidarians and sponges.
The majority of animal species are invertebrates; one estimate puts the figure at 97%. Many invertebrate taxa have a greater number and diversity of species than the entire subphylum of Vertebrata. Invertebrates vary widely in size, from 50 μm (0.002 in) rotifers to the 9–10 m (30–33 ft) colossal squid.
Some so-called invertebrates, such as the Tunicata and Cephalochordata, are actually sister chordate subphyla to Vertebrata, being more closely related to vertebrates than to other invertebrates. This makes the term "invertebrates" paraphyletic rather than a natural group, so it has little meaning in taxonomy.
The word "invertebrate" comes from the Latin word vertebra, which means a joint in general, and sometimes specifically a joint from the spinal column of a vertebrate. The jointed aspect of vertebra is derived from the concept of turning, expressed in the root verto or vorto, to turn. The prefix in- means "not" or "without".
The term invertebrates is not always precise among non-biologists since it does not accurately describe a taxon in the same way that Arthropoda, Vertebrata or Manidae do. Each of these terms describes a valid taxon, phylum, subphylum or family. "Invertebrata" is a term of convenience, not a taxon; it has very little circumscriptional significance except within the Chordata. The Vertebrata as a subphylum comprises such a small proportion of the Metazoa that to speak of the kingdom Animalia in terms of "Vertebrata" and "Invertebrata" has limited practicality. In the more formal taxonomy of Animalia, other attributes logically should precede the presence or absence of the vertebral column in constructing a cladogram, for example, the presence of a notochord. That would at least circumscribe the Chordata. However, even the notochord would be a less fundamental criterion than aspects of embryological development and symmetry or perhaps bauplan.
Despite this, the concept of invertebrates as a taxon of animals has persisted for over a century among the laity, and within the zoological community and in its literature it remains in use as a term of convenience for animals that are not members of the Vertebrata. The following text reflects earlier scientific understanding of the term and of those animals which have constituted it. According to this understanding, invertebrates do not possess a skeleton of bone, either internal or external. They include hugely varied body plans. Many have fluid-filled, hydrostatic skeletons, like jellyfish or worms. Others have hard exoskeletons, outer shells like those of insects and crustaceans. The most familiar invertebrates include the Protozoa, Porifera, Coelenterata, Platyhelminthes, Nematoda, Annelida, Echinodermata, Mollusca and Arthropoda. Arthropoda include insects, crustaceans and arachnids.
By far the largest number of described invertebrate species are insects. The following table lists the number of described extant species for major invertebrate groups as estimated in the IUCN Red List of Threatened Species, 2014.3.
The IUCN estimates that 66,178 extant vertebrate species have been described, which means that over 95% of the described animal species in the world are invertebrates.
The trait that is common to all invertebrates is the absence of a vertebral column (backbone): this creates a distinction between invertebrates and vertebrates. The distinction is one of convenience only; it is not based on any clear biologically homologous trait, any more than the common trait of having wings functionally unites insects, bats, and birds, or than not having wings unites tortoises, snails and sponges. Being animals, invertebrates are heterotrophs, and require sustenance in the form of the consumption of other organisms. With a few exceptions, such as the Porifera, invertebrates generally have bodies composed of differentiated tissues. There is also typically a digestive chamber with one or two openings to the exterior.
The body plans of most multicellular organisms exhibit some form of symmetry, whether radial, bilateral, or spherical. A minority, however, exhibit no symmetry. All gastropod species, for example, are asymmetric to some degree. This is easily seen in snails and sea snails, which have helical shells. Slugs appear externally symmetrical, but their pneumostome (breathing hole) is located on the right side. Other gastropods develop external asymmetry, such as Glaucus atlanticus, which develops asymmetrical cerata as it matures. The origin of gastropod asymmetry is a subject of scientific debate.
Other examples of asymmetry are found in fiddler crabs and hermit crabs. They often have one claw much larger than the other. If a male fiddler loses its large claw, it will grow another on the opposite side after moulting. Asymmetry also occurs in sessile animals such as sponges; in coral colonies (with the exception of the individual polyps, which exhibit radial symmetry); in alpheid shrimp claws that lack pincers; and in some copepods, polyopisthocotyleans, and monogeneans which parasitize by attachment or residency within the gill chamber of their fish hosts.
Invertebrate neurons differ from mammalian cells. Invertebrate cells fire in response to similar stimuli as mammals, such as tissue trauma, high temperature, or changes in pH. The first invertebrate in which a neuron cell was identified was the medicinal leech, Hirudo medicinalis.
Learning and memory using nociceptors have been described in the sea hare, Aplysia. Mollusk neurons are able to detect increasing pressures and tissue trauma.
Neurons have been identified in a wide range of invertebrate species, including annelids, molluscs, nematodes and arthropods.
One type of invertebrate respiratory system is the open respiratory system composed of spiracles, tracheae, and tracheoles that terrestrial arthropods use to transport metabolic gases to and from tissues. The distribution of spiracles can vary greatly among the many orders of insects, but in general each segment of the body can have only one pair of spiracles, each of which connects to an atrium and has a relatively large tracheal tube behind it. The tracheae are invaginations of the cuticular exoskeleton that branch (anastomose) throughout the body with diameters from only a few micrometres up to 0.8 mm. The smallest tubes, tracheoles, penetrate cells and serve as sites of diffusion for water, oxygen, and carbon dioxide. Gas may be conducted through the respiratory system by means of active ventilation or passive diffusion. Unlike vertebrates, insects do not generally carry oxygen in their haemolymph.
A tracheal tube may contain ridge-like circumferential rings of taenidia in various geometries such as loops or helices. In the head, thorax, or abdomen, tracheae may also be connected to air sacs. Many insects, such as grasshoppers and bees, which actively pump the air sacs in their abdomen, are able to control the flow of air through their body. In some aquatic insects, the tracheae exchange gas through the body wall directly, in the form of a gill, or function essentially as normal, via a plastron. Despite being internal, the tracheae of arthropods are shed during moulting (ecdysis).
Only vertebrate animals have ears, though many invertebrates detect sound using other kinds of sense organs. In insects, tympanal organs are used to hear distant sounds. They are located either on the head or elsewhere, depending on the insect family. The tympanal organs of some insects are extremely sensitive, offering acute hearing beyond that of most other animals. The female cricket fly Ormia ochracea has tympanal organs on each side of her abdomen. They are connected by a thin bridge of exoskeleton and they function like a tiny pair of eardrums, but, because they are linked, they provide acute directional information. The fly uses her "ears" to detect the call of her host, a male cricket. Depending on where the song of the cricket is coming from, the fly's hearing organs will reverberate at slightly different frequencies. This difference may be as little as 50 billionths of a second, but it is enough to allow the fly to home in directly on a singing male cricket and parasitise it.
Like vertebrates, most invertebrates reproduce at least partly through sexual reproduction. They produce specialized reproductive cells that undergo meiosis to produce smaller, motile spermatozoa or larger, non-motile ova. These fuse to form zygotes, which develop into new individuals. Others are capable of asexual reproduction, or sometimes, both methods of reproduction.
Extensive research with model invertebrate species such as Drosophila melanogaster and Caenorhabditis elegans has contributed much to our understanding of meiosis and reproduction. However, beyond the few model systems, the modes of reproduction found in invertebrates show incredible diversity. In one extreme example, it is estimated that 10% of oribatid mite species have persisted without sexual reproduction and have reproduced asexually for more than 400 million years.
Social behavior is widespread in invertebrates, including cockroaches, termites, aphids, thrips, ants, bees, Passalidae, Acari, spiders, and more. Social interaction is particularly salient in eusocial species but applies to other invertebrates as well.
Insects recognize information transmitted by other insects.
The term invertebrates covers several phyla. One of these is the sponges (Porifera). They were long thought to have diverged from other animals early. They lack the complex organization found in most other phyla. Their cells are differentiated, but in most cases not organized into distinct tissues. Sponges typically feed by drawing in water through pores. Some speculate that sponges are not so primitive, but may instead be secondarily simplified. The Ctenophora and the Cnidaria, which includes sea anemones, corals, and jellyfish, are radially symmetric and have digestive chambers with a single opening, which serves as both the mouth and the anus. Both have distinct tissues, but they are not organized into organs. There are only two main germ layers, the ectoderm and endoderm, with only scattered cells between them. As such, they are sometimes called diploblastic.
The Echinodermata are radially symmetric and exclusively marine, including starfish (Asteroidea), sea urchins (Echinoidea), brittle stars (Ophiuroidea), sea cucumbers (Holothuroidea) and feather stars (Crinoidea).
The largest animal phylum is also included within invertebrates: the Arthropoda, including insects, spiders, crabs, and their kin. All these organisms have a body divided into repeating segments, typically with paired appendages. In addition, they possess a hardened exoskeleton that is periodically shed during growth. Two smaller phyla, the Onychophora and Tardigrada, are close relatives of the arthropods and share some traits with them, excluding the hardened exoskeleton. The Nematoda or roundworms, are perhaps the second largest animal phylum, and are also invertebrates. Roundworms are typically microscopic, and occur in nearly every environment where there is water. A number are important parasites. Smaller phyla related to them are the Kinorhyncha, Priapulida, and Loricifera. These groups have a reduced coelom, called a pseudocoelom. Other invertebrates include the Nemertea or ribbon worms, and the Sipuncula.
Another phylum is Platyhelminthes, the flatworms. These were originally considered primitive, but it now appears they developed from more complex ancestors. Flatworms are acoelomates, lacking a body cavity, as are their closest relatives, the microscopic Gastrotricha. The Rotifera or rotifers, are common in aqueous environments. Invertebrates also include the Acanthocephala or spiny-headed worms, the Gnathostomulida, Micrognathozoa, and the Cycliophora.
Also included are two of the most successful animal phyla, the Mollusca and Annelida. The former, which is the second-largest animal phylum by number of described species, includes animals such as snails, clams, and squids, and the latter comprises the segmented worms, such as earthworms and leeches. These two groups have long been considered close relatives because of the common presence of trochophore larvae, but the annelids were considered closer to the arthropods because they are both segmented. Now, this is generally considered convergent evolution, owing to many morphological and genetic differences between the two phyla.
Among lesser phyla of invertebrates are the Hemichordata, or acorn worms, and the Chaetognatha, or arrow worms. Other phyla include Acoelomorpha, Brachiopoda, Bryozoa, Entoprocta, Phoronida, and Xenoturbellida.
Invertebrates can be classified into several main categories, some of which are taxonomically obsolescent or debatable, but still used as terms of convenience. Each, however, is treated in its own article.
The earliest animal fossils appear to be those of invertebrates. 665-million-year-old fossils in the Trezona Formation at Trezona Bore, West Central Flinders, South Australia have been interpreted as being early sponges. Some paleontologists suggest that animals appeared much earlier, possibly as early as 1 billion years ago though they probably became multicellular in the Tonian. Trace fossils such as tracks and burrows found in the late Neoproterozoic era indicate the presence of triploblastic worms, roughly as large (about 5 mm wide) and complex as earthworms.
Around 453 MYA, animals began diversifying, and many of the important groups of invertebrates diverged from one another. Fossils of invertebrates are found in various types of sediment from the Phanerozoic. Fossils of invertebrates are commonly used in stratigraphy.
Carl Linnaeus divided these animals into only two groups, the Insecta and the now-obsolete Vermes (worms). Jean-Baptiste Lamarck, who was appointed to the position of "Curator of Insecta and Vermes" at the Muséum National d'Histoire Naturelle in 1793, both coined the term "invertebrate" to describe such animals and divided the original two groups into ten, by splitting Arachnida and Crustacea from the Linnean Insecta, and Mollusca, Annelida, Cirripedia, Radiata, Coelenterata and Infusoria from the Linnean Vermes. They are now classified into over 30 phyla, from simple organisms such as sea sponges and flatworms to complex animals such as arthropods and molluscs.
Invertebrates are animals without a vertebral column. This has led to the conclusion that invertebrates are a group that deviates from the normal, vertebrates. This has been said to be because researchers in the past, such as Lamarck, viewed vertebrates as a "standard": in Lamarck's theory of evolution, he believed that characteristics acquired through the evolutionary process involved not only survival, but also progression toward a "higher form", to which humans and vertebrates were closer than invertebrates were. Although goal-directed evolution has been abandoned, the distinction of invertebrates and vertebrates persists to this day, even though the grouping has been noted to be "hardly natural or even very sharp." Another reason cited for this continued distinction is that Lamarck created a precedent through his classifications which is now difficult to escape from. It is also possible that some humans believe that, they themselves being vertebrates, the group deserves more attention than invertebrates. In any event, in the 1968 edition of Invertebrate Zoology, it is noted that "division of the Animal Kingdom into vertebrates and invertebrates is artificial and reflects human bias in favor of man's own relatives." The book also points out that the group lumps a vast number of species together, so that no one characteristic describes all invertebrates. In addition, some species included are only remotely related to one another, with some more related to vertebrates than other invertebrates (see Paraphyly).
For many centuries, invertebrates were neglected by biologists, in favor of big vertebrates and "useful" or charismatic species. Invertebrate biology was not a major field of study until the work of Linnaeus and Lamarck in the 18th century. During the 20th century, invertebrate zoology became one of the major fields of natural sciences, with prominent discoveries in the fields of medicine, genetics, palaeontology, and ecology. The study of invertebrates has also benefited law enforcement, as arthropods, and especially insects, were discovered to be a source of information for forensic investigators.
Two of the most commonly studied model organisms nowadays are invertebrates: the fruit fly Drosophila melanogaster and the nematode Caenorhabditis elegans. They have long been the most intensively studied model organisms, and were among the first life-forms to be genetically sequenced. This was facilitated by the severely reduced state of their genomes, but many genes, introns, and linkages have been lost. Analysis of the starlet sea anemone genome has emphasised the importance of sponges, placozoans, and choanoflagellates, also being sequenced, in explaining the arrival of 1500 ancestral genes unique to animals. Invertebrates are also used by scientists in the field of aquatic biomonitoring to evaluate the effects of water pollution and climate change.
15,225 | Ivar Aasen | Ivar Andreas Aasen (Norwegian pronunciation: [ˈîːvɑr ˈòːsn̩]; 5 August 1813 – 23 September 1896) was a Norwegian philologist, lexicographer, playwright, and poet. He is best known for having assembled one of the two official written versions of the Norwegian language, Nynorsk, from various dialects.
He was born as Iver Andreas Aasen at Åsen in Ørsta (then Ørsten), in the district of Sunnmøre, on the west coast of Norway. His father, a peasant with a small farm, Ivar Jonsson, died in 1826. The younger Ivar was brought up to farmwork, but he assiduously cultivated all his leisure in reading. An early interest of his was botany. When he was eighteen, he opened an elementary school in his native parish. In 1833 he entered the household of Hans Conrad Thoresen, the husband of the eminent writer Magdalene Thoresen, in Herøy (then Herø), and there he picked up the elements of Latin. Gradually, and by dint of infinite patience and concentration, the young peasant mastered many languages, and began the scientific study of their structure. Ivar single-handedly created a new language for Norway to become the "literary" language.
About 1846 he had freed himself from all the burden of manual labour, and could occupy his thoughts with the dialect of his native district, Sunnmøre; his first publication was a small collection of folk songs in the Sunnmøre dialect (1843). His remarkable abilities now attracted general attention, and he was helped to continue his studies undisturbed. His Grammar of the Norwegian Dialects (Danish: Det Norske Folkesprogs Grammatik, 1848) was the result of much labour, and of journeys taken to every part of the country. Aasen's famous Dictionary of the Norwegian Dialects (Danish: Ordbog over det Norske Folkesprog) appeared in its original form in 1850, and from this publication dates all the wide cultivation of the popular language in Norwegian, since Aasen really did no less than construct, out of the different materials at his disposal, a popular language or definite folke-maal (people's language) for Norway. By 1853, he had created the norm for utilizing his new language, which he called Landsmaal, meaning country language. With certain modifications, the most important of which were introduced later by Aasen himself, but also through a later policy aiming to merge this Norwegian language with Dano-Norwegian, this language has become Nynorsk ("New Norwegian"), the second of Norway's two official languages (the other being Bokmål, the Dano-Norwegian descendant of the Danish language used in Norway in Aasen's time). An unofficial variety of Norwegian closer to Aasen's language is still found in Høgnorsk ("High Norwegian"). Today, some consider Nynorsk on equal footing with Bokmål, as Bokmål tends to be used more in radio and television and most newspapers, whereas New Norse (Nynorsk) is used in government work as well as in approximately 17% of schools. Although it is not as common as its brother language, it remains a viable language, as a large minority of Norwegians, including many scholars and authors, use it as their primary language. New Norse is both a written and spoken language.
Aasen composed poems and plays in the composite dialect to show how it should be used; one of these dramas, The Heir (1855), was frequently acted, and may be considered as the pioneer of all the abundant dialect-literature of the last half-century of the 1800s, from Vinje to Garborg. In 1856, he published Norske Ordsprog, a treatise on Norwegian proverbs. Aasen continuously enlarged and improved his grammars and his dictionary. He lived very quietly in lodgings in Oslo (then Christiania), surrounded by his books and shrinking from publicity, but his name grew into wide political favour as his ideas about the language of the peasants became more and more the watch-word of the popular party. In 1864, he published his definitive grammar of Nynorsk and in 1873 he published the definitive dictionary.
Quite early in his career, in 1842, he had begun to receive a grant to enable him to give his entire attention to his philological investigations; and the Storting (Norwegian parliament), conscious of the national importance of his work, treated him in this respect with more and more generosity as he advanced in years. He continued his investigations to the last, but it may be said that, after the 1873 edition of his Dictionary (with a new title: Danish: Norsk Ordbog), he added but little to his stores. Ivar Aasen holds perhaps an isolated place in literary history as the one man who has invented, or at least selected and constructed, a language which has pleased so many thousands of his countrymen that they have accepted it for their schools, their sermons and their songs. He died in Christiania on 23 September 1896, and was buried with public honours.
Ivar Aasen-tunet, an institution devoted to the Nynorsk language, opened in June 2000. The building in Ørsta was designed by Norwegian architect Sverre Fehn. Their web page includes most of Aasen's texts, numerous other examples of Nynorsk literature (in Nettbiblioteket, the Internet Library), and some articles, including some in English, about language history in Norway.
Språkåret 2013 (The Language Year 2013) celebrated Ivar Aasen's 200th anniversary, as well as the 100th anniversary of Det Norske Teateret. The year's main focus was to celebrate linguistic diversity in Norway. In a poll released in connection with the celebration, 56% of Norwegians said they held positive views of Aasen, while 7% held negative views. On Aasen's 200th anniversary, 5 August 2013, Bergens Tidende, which is normally published mainly in Bokmål, published an edition fully in Nynorsk in memory of Aasen.
Aasen published a wide range of material, some of it released posthumously.
{
"paragraph_id": 0,
"text": "Ivar Andreas Aasen (Norwegian pronunciation: [ˈîːvɑr ˈòːsn̩]; 5 August 1813 – 23 September 1896) was a Norwegian philologist, lexicographer, playwright, and poet. He is best known for having assembled one of the two official written versions of the Norwegian language, Nynorsk, from various dialects.",
"title": ""
},
{
"paragraph_id": 1,
"text": "He was born as Iver Andreas Aasen at Åsen in Ørsta (then Ørsten), in the district of Sunnmøre, on the west coast of Norway. His father, a peasant with a small farm, Ivar Jonsson, died in 1826. The younger Ivar was brought up to farmwork, but he assiduously cultivated all his leisure in reading. An early interest of his was botany. When he was eighteen, he opened an elementary school in his native parish. In 1833 he entered the household of Hans Conrad Thoresen, the husband of the eminent writer Magdalene Thoresen, in Herøy (then Herø), and there he picked up the elements of Latin. Gradually, and by dint of infinite patience and concentration, the young peasant mastered many languages, and began the scientific study of their structure. Ivar single-handedly created a new language for Norway to become the \"literary\" language.",
"title": "Background"
},
{
"paragraph_id": 2,
"text": "About 1846 he had freed himself from all the burden of manual labour, and could occupy his thoughts with the dialect of his native district, Sunnmøre; his first publication was a small collection of folk songs in the Sunnmøre dialect (1843). His remarkable abilities now attracted general attention, and he was helped to continue his studies undisturbed. His Grammar of the Norwegian Dialects (Danish: Det Norske Folkesprogs Grammatik, 1848) was the result of much labour, and of journeys taken to every part of the country. Aasen's famous Dictionary of the Norwegian Dialects (Danish: Ordbog over det Norske Folkesprog) appeared in its original form in 1850, and from this publication dates all the wide cultivation of the popular language in Norwegian, since Aasen really did no less than construct, out of the different materials at his disposal, a popular language or definite folke-maal (people's language) for Norway. By 1853, he had created the norm for utilizing his new language, which he called Landsmaal, meaning country language. With certain modifications, the most important of which were introduced later by Aasen himself, but also through a latter policy aiming to merge this Norwegian language with Dano-Norwegian, this language has become Nynorsk (\"New Norwegian\"), the second of Norway's two official languages (the other being Bokmål, the Dano-Norwegian descendant of the Danish language used in Norway in Aasen's time). An unofficial variety of Norwegian closer to Aasen's language is still found in Høgnorsk (\"High Norwegian\"). Today, some consider Nynorsk on equal footing with Bokmål, as Bokmål tends to be used more in radio and television and most newspapers, whereas New Norse (Nynorsk) is used equally in government work as well as approximately 17% of schools. Although it is not as common as its brother language, it needs to be looked upon as a viable language, as a large minority of Norwegians use it as their primary language including many scholars and authors. New Norse is both a written and spoken language.",
"title": "Career"
},
{
"paragraph_id": 3,
"text": "Aasen composed poems and plays in the composite dialect to show how it should be used; one of these dramas, The Heir (1855), was frequently acted, and may be considered as the pioneer of all the abundant dialect-literature of the last half-century of the 1800s, from Vinje to Garborg. In 1856, he published Norske Ordsprog, a treatise on Norwegian proverbs. Aasen continuously enlarged and improved his grammars and his dictionary. He lived very quietly in lodgings in Oslo (then Christiania), surrounded by his books and shrinking from publicity, but his name grew into wide political favour as his ideas about the language of the peasants became more and more the watch-word of the popular party. In 1864, he published his definitive grammar of Nynorsk and in 1873 he published the definitive dictionary.",
"title": "Career"
},
{
"paragraph_id": 4,
"text": "Quite early in his career, in 1842, he had begun to receive a grant to enable him to give his entire attention to his philological investigations; and the Storting (Norwegian parliament), conscious of the national importance of his work, treated him in this respect with more and more generosity as he advanced in years. He continued his investigations to the last, but it may be said that, after the 1873 edition of his Dictionary (with a new title: Danish: Norsk Ordbog), he added but little to his stores. Ivar Aasen holds perhaps an isolated place in literary history as the one man who has invented, or at least selected and constructed, a language which has pleased so many thousands of his countrymen that they have accepted it for their schools, their sermons and their songs. He died in Christiania on 23 September 1896, and was buried with public honours.",
"title": "Career"
},
{
"paragraph_id": 5,
"text": "Ivar Aasen-tunet, an institution devoted to the Nynorsk language, opened in June 2000. The building in Ørsta was designed by Norwegian architect Sverre Fehn. Their web page includes most of Aasens' texts, numerous other examples of Nynorsk literature (in Nettbiblioteket, the Internet Library), and some articles, including some in English, about language history in Norway.",
"title": "The Ivar Aasen Centre"
},
{
"paragraph_id": 6,
"text": "Språkåret 2013 (The Language Year 2013) celebrated Ivar Aasen's 200 year anniversary, as well as the 100 year anniversary of Det Norske Teateret. The year's main focus was to celebrate linguistic diversity in Norway. In a poll released in connection with the celebration, 56% of Norwegians said they held positive views of Aasen, while 7% held negative views. On Aasen's 200 anniversary, 5 August 2013, Bergens Tidende, which is normally published mainly in Bokmål, published an edition fully in Nynorsk in memory of Aasen.",
"title": "2013 Language year"
},
{
"paragraph_id": 7,
"text": "Aasen published a wide range of material, some of it released posthumously.",
"title": "Bibliography"
},
{
"paragraph_id": 8,
"text": "",
"title": "External links"
}
]
| Ivar Andreas Aasen was a Norwegian philologist, lexicographer, playwright, and poet. He is best known for having assembled one of the two official written versions of the Norwegian language, Nynorsk, from various dialects. | 2001-11-06T20:36:47Z | 2023-11-05T13:28:59Z | [
"Template:Cite web",
"Template:IPA-no",
"Template:Reflist",
"Template:Cite book",
"Template:EB1911",
"Template:Authority control",
"Template:Short description",
"Template:Lang-da",
"Template:Main",
"Template:Clear left",
"Template:Infobox writer",
"Template:Cite encyclopedia",
"Template:Use dmy dates",
"Template:Harvnb"
]
| https://en.wikipedia.org/wiki/Ivar_Aasen |
15,226 | Irredentism | Irredentism is a desire by one state to annex a territory of another state. This desire can be motivated by ethnic reasons because the population of the territory is ethnically similar to the population of the parent state. Historical reasons may also be responsible, i.e., that the territory previously formed part of the parent state. However, difficulties in applying the concept to concrete cases have given rise to academic debates about its precise definition. Disagreements concern whether either or both ethnic and historical reasons have to be present and whether non-state actors can also engage in irredentism. A further dispute is whether attempts to absorb a full neighboring state are also included. There are various types of irredentism. For typical forms of irredentism, the parent state already exists before the territorial conflict with a neighboring state arises. However, there are also forms of irredentism in which the parent state is newly created by uniting an ethnic group spread across several countries. Another distinction concerns whether the country to which the disputed territory currently belongs is a regular state, a former colony, or a collapsed state.
A central research topic concerning irredentism is the question of how it is to be explained or what causes it. Many explanations hold that ethnic homogeneity within a state makes irredentism more likely. Discrimination against the ethnic group in the neighboring territory is another contributing factor. A closely related explanation argues that national identities based primarily on ethnicity, culture, and history increase irredentist tendencies. Another approach is to explain irredentism as an attempt to increase power and wealth. In this regard, it is argued that irredentist claims are more likely if the neighboring territory is relatively rich. Many explanations also focus on the regime type and hold that democracies are less likely to engage in irredentism while anocracies are particularly open to it.
Irredentism has been an influential force in world politics since the mid-nineteenth century. It has been responsible for many armed conflicts, even though international law is hostile to it and irredentist movements often fail to achieve their goals. The term was originally coined from the Italian phrase Italia irredenta and referred to an Italian movement after 1878 claiming parts of Switzerland and the Austro-Hungarian Empire. Often discussed cases of irredentism include Nazi Germany's annexation of the Sudetenland, Somalia's invasion of Ethiopia, and Argentina's invasion of the Falkland Islands. Later examples are attempts to establish a Greater Serbia following the breakup of Yugoslavia and Russia's annexation of Crimea following the dissolution of the Soviet Union. Irredentism is closely related to revanchism and secession. Revanchism is an attempt to annex territory belonging to another state. It is motivated by the goal of taking revenge for a previous grievance, in contrast to the goal of irredentism of building an ethnically unified nation-state. In the case of secession, a territory breaks away and forms an independent state instead of merging with another state.
The term irredentism was coined from the Italian phrase Italia irredenta (unredeemed Italy). This phrase originally referred to territory in Austria-Hungary that was mostly or partly inhabited by ethnic Italians. In particular, it applies to Trentino and Trieste, but also Gorizia, Istria, Fiume, and Dalmatia during the 19th and early 20th centuries. Irredentist projects often use the term "Greater" to label the desired outcome of their expansion, as in "Greater Serbia" or "Greater Russia".
Irredentism is often understood as the claim that territories belonging to one state should be incorporated into another state because their population is ethnically similar or because the territory historically belonged to the other state. Many definitions of irredentism have been proposed to give a more precise formulation. Despite a wide overlap concerning its general features, there is no consensus about its exact characterization. These disagreements matter when evaluating whether irredentism was the cause of a war, which is difficult in many cases; different definitions often lead to opposite conclusions.
There is wide consensus that irredentism is a form of territorial dispute involving the attempt to annex territories belonging to a neighboring state. However, not all such attempts constitute forms of irredentism and there is no academic consensus on precisely what other features need to be present. This concerns disagreements about who claims the territory, for what reasons they do so, and how much territory is claimed. Most scholars define irredentism as a claim made by one state on the territory of another state. In this regard, there are three essential entities to irredentism: (1) an irredentist state or parent state, (2) a neighboring host state or target state, and (3) the disputed territory belonging to the host state, often referred to as irredenta. According to this definition, popular movements demanding territorial change by non-state actors do not count as irredentist in the strict sense. A different definition characterizes irredentism as the attempt of an ethnic minority to break away and join their "real" motherland even though this minority is a non-state actor.
The reason for engaging in territorial conflict is another issue, with some scholars stating that irredentism is primarily motivated by ethnicity. In this view, the population in the neighboring territory is ethnically similar and the intention is to retrieve the area to unite the people. This definition implies, for example, that the majority of the border disputes in the history of Latin America were not forms of irredentism. Usually, irredentism is defined in terms of the motivation of the irredentist state, even if the territory is annexed against the will of the local population. Other theorists focus more on the historical claim that the disputed territory used to be part of the state's ancestral homeland. This is close to the literal meaning of the original Italian expression "terra irredenta" as unredeemed land. In this view, the ethnicity of the people inhabiting this territory is not important. However, it is also possible to combine both characterizations, i.e. that the motivation is either ethnic or historical or both. Some scholars, like Benyamin Neuberger, include geographical reasons in their definitions.
A further disagreement concerns the amount of area that is to be annexed. Usually, irredentism is restricted to the attempt to incorporate some parts of another state. In this regard, irredentism challenges established borders with the neighboring state but does not challenge the existence of the neighboring state in general. However, some definitions of irredentism also include attempts to absorb the whole neighboring state and not just a part of it. In this sense, claims by both South Korea and North Korea to incorporate the whole of the Korean Peninsula would be considered a form of irredentism.
A popular view combining many of the elements listed above holds that irredentism is based on incongruence between the borders of a state and the boundaries of the corresponding nation. State borders are usually clearly delimited, both physically and on maps. National boundaries, on the other hand, are less tangible since they correspond to a group's perception of its historic, cultural, and ethnic boundaries. Irredentism may manifest if state borders do not correspond to national boundaries. The objective of irredentism is to enlarge a state to establish a congruence between its borders and the boundaries of the corresponding nation.
Various types of irredentism have been proposed. However, not everyone agrees that all the types listed here constitute forms of irredentism and it often depends on what definition is used. According to political theorists Naomi Chazan and Donald L. Horowitz, there are two types of irredentism. The typical case involves one state that intends to annex territories belonging to a neighboring state. Nazi Germany’s claim on the Sudetenland of Czechoslovakia is an example of this form of irredentism.
For the second type, there is no pre-existing parent state. Instead, a cohesive group existing as a minority in multiple countries intends to unify to form a new parent state. The intended creation of a Kurdistan state uniting the Kurds living in Turkey, Syria, Iraq, and Iran is an example of the second type. If such a project is successful for only one segment, the result is secession and not irredentism. This happened, for example, during the breakup of Yugoslavia when Yugoslavian Slovenes formed the new state of Slovenia while the Austrian Slovenes did not join them and remained part of Austria. Not all theorists accept that the second type constitutes a form of irredentism. In this regard, it is often argued that it is too similar to secession to maintain a distinction between the two. For example, political scholar Benyamin Neuberger holds that a pre-existing parent state is necessary for irredentism.
Political scientist Thomas Ambrosio restricts his definition to cases involving a pre-existing parent state and distinguishes three types of irredentism: (1) between two states, (2) between a state and a former colony, and (3) between a state and a collapsed state. The typical case is between two states. A textbook example of this is Somalia's invasion of Ethiopia. In the second case of decolonization, the territory to be annexed is a former colony of another state and not a regular part of it. An example is the Indonesian invasion and occupation of the former Portuguese colony of East Timor. In the case of state collapse, one state disintegrates and a neighboring state absorbs some of its former territories. This was the case for the irredentist movements by Croatia and Serbia during the breakup of Yugoslavia.
Explanations of irredentism try to determine what causes irredentism, how it unfolds, and how it can be peacefully resolved. Various hypotheses have been proposed but there is still very little consensus on how irredentism is to be explained despite its prevalence and its long history of provoking armed conflicts. Some of these proposals can be combined but others conflict with each other and the available evidence may not be sufficient to decide between them. An active research topic in this regard concerns the reasons for irredentism. Many countries have ethnic kin outside their borders. But only a few are willing to engage in violent conflicts to annex foreign territory in an attempt to unite their kin. Research on the causes of irredentism tries to explain why some countries pursue irredentism but others do not. Relevant factors often discussed include ethnicity, nationalism, economic considerations, the desire to increase power, and the type of regime.
A common explanation of irredentism focuses on ethnic arguments. It is based on the observation that irredentist claims are primarily advanced by states with a homogenous ethnic population. This is explained by the idea that, if a state is composed of several ethnic groups, then annexing a territory inhabited primarily by one of those groups would shift the power balance in favor of this group. For this reason, other groups in the state are likely to internally reject the irredentist claims. This inhibiting factor is not present for homogenous states. A similar argument is also offered for the enclave to be annexed: an ethnically heterogenous enclave is less likely to desire to be absorbed by another state for ethnic reasons since this would only benefit one ethnic group. These considerations explain, for example, why irredentism is not very common in Africa since most African states are ethnically heterogeneous. Relevant factors for the ethnic motivation for irredentism are how large the dominant ethnic group is relative to other groups and how large it is in absolute terms. It also matters whether the ethnic group is relatively dispersed or located in a small core area and whether it is politically disadvantaged.
Explanations focusing on nationalism are closely related to ethnicity-based explanations. Nationalism can be defined as the claim that the boundaries of a state should match those of the nation. According to constructivist accounts, for example, the dominant national identity is one of the central factors behind irredentism. In this view, identities based on ethnicity, culture, and history can easily invite tendencies to enlarge national borders. They may justify the goal of integrating ethnically and culturally similar territories. Civic national identities, which are more political in nature, are on the other hand more closely tied to pre-existing national boundaries.
Structural accounts use a slightly different approach and focus on the relationship between nationalism and the regional context. They focus on the tension between state sovereignty and national self-determination. State sovereignty is the principle of international law holding that each state has sovereignty over its own territory. It means that states are not allowed to interfere with essentially domestic affairs of other states. National self-determination, on the other hand, concerns the right of people to determine their own international political status. According to the structural explanation, emphasis on national self-determination may legitimize irredentist claims while the principle of state sovereignty defends the status quo of the existing sovereign borders. This position is supported by the observation that irredentist conflicts are much more common during times of international upheavals.
Another factor commonly cited as a force fueling irredentism is discrimination against the main ethnic group in the enclave. Irredentist states often try to legitimize their aggression against neighbors by presenting them as humanitarian interventions aimed at protecting their discriminated ethnic kin. This justification was used, for example, in Armenia's engagement in the Nagorno-Karabakh conflict, in Serbia's involvement in the Croatian War of Independence, and in Russia's annexation of Crimea. Some political theorists, like David S. Siroky and Christopher W. Hale, hold that there is little empirical evidence for arguments based on ethnic homogeneity and discrimination. In this view, they are mainly used as a pretext to hide other goals, such as material gain.
Another relevant factor is the outlook of the population inhabiting the territory to be annexed. The desire of the irredentist state to annex a foreign territory and the desire of that territory to be annexed do not always overlap. In some cases, a minority group does not want to be annexed, as was the case for the Crimean Tatars in Russia's annexation of Crimea. In other cases, a minority group would want to be annexed but the intended parent state is not interested.
Various accounts stress the role of power and economic benefits as reasons for irredentism. Realist explanations focus on the power balance between the irredentist state and the target state: the more this power balance shifts in favor of the irredentist state, the more likely violent conflicts become. A key factor in this regard is also the reaction of the international community, i.e. whether irredentist claims are tolerated or rejected. Irredentism can be used as a tool or pretext to increase the parent state's power. Rational choice theories study how irredentism is caused by decision-making processes of certain groups within a state. In this view, irredentism is a tool used by elites to secure their political interests. They do so by appealing to popular nationalist sentiments. This can be used, for example, to gain public support against political rivals or to divert attention away from domestic problems.
Other explanations focus on economic factors. For example, larger states enjoy advantages that come with having an increased market and decreased per capita cost of defense. However, there are also disadvantages to having a bigger state, such as the challenges that come with accommodating a wider range of citizens' preferences. Based on these lines of thought, it has been argued that states are more likely to advocate irredentist claims if the enclave is a relatively rich territory.
An additional relevant factor is the regime type of both the irredentist state and the neighboring state. In this regard, it is often argued that democratic states are less likely to engage in irredentism. One reason cited is that democracies often are more inclusive of other ethnic groups. Another is that democracies are in general less likely to engage in violent conflicts. This is closely related to democratic peace theory, which claims that democracies try to avoid armed conflicts with other democracies. This is also supported by the observation that most irredentist conflicts are started by authoritarian regimes. However, irredentism constitutes a paradox for democratic systems. The reason is that democratic ideals pertaining to the ethnic group can often be used to justify its claim, which may be interpreted as the expression of a popular will toward unification. But there are also cases of irredentism made primarily by a government that is not broadly supported by the population.
According to Siroky and Hale, anocratic regimes are most likely to engage in irredentist conflicts and to become their victim. This is based on the idea that they share some democratic ideals favoring irredentism but often lack institutional stability and accountability. This makes it more likely for the elites to consolidate their power using ethno-nationalist appeals to the masses.
Irredentism is a widespread phenomenon and has been an influential force in world politics since the mid-nineteenth century. It has been responsible for countless conflicts. Many irredentist disputes remain unresolved today and continue to be sources of discord between nations. In this regard, irredentism is a potential source of conflict in many places and often escalates into military confrontations between states. For example, international relations theorist Markus Kornprobst argues that "no other issue over which states fight is as war-prone as irredentism". Political scholar Rachel Walker points out that "there is scarcely a country in the world that is not involved in some sort of irredentist quarrel ... although few would admit to this". Political theorists Stephen M. Saideman and R. William Ayres argue that many of the most important conflicts of the 1990s were caused by irredentism, such as the wars for a Greater Serbia and a Greater Croatia. Irredentism carries considerable potential for future conflict since many states have kin groups in adjacent countries. It has been argued that it poses a significant danger to human security and the international order. For these reasons, irredentism has been a central topic in the field of international relations.
For the most part, international law is hostile to irredentism. For example, the United Nations Charter calls for respect for established territorial borders and defends state sovereignty. Similar outlooks are taken by the Organization of African Unity, the Organization of American States, and the Helsinki Final Act. Since irredentist claims are based on conflicting sovereignty assertions, it is often difficult to find a working compromise. Peaceful resolutions of irredentist conflicts often result in mutual recognition of de facto borders rather than territorial change. International relations theorists Martin Griffiths et al. argue that the threat of rising irredentism may be reduced by focusing on political pluralism and respect for minority rights.
Irredentist movements, peaceful or violent, are rarely successful. In many cases, despite aiming to help ethnic minorities, irredentism often has the opposite effect and ends up worsening their living conditions. On the one hand, the state still in control of those territories may decide to further discriminate against them as an attempt to decrease the threat to its national security. On the other hand, the irredentist state may merely claim to care about the ethnic minorities but, in truth, use such claims only as a pretext to increase its territory or to destabilize an opponent.
The emergence of irredentism is tied to the rise of modern nationalism and the idea of a nation-state, which are often linked to the French Revolution. However, some political scholars, like Griffiths et al., argue that phenomena similar to irredentism existed even before. For example, part of the justification for the crusades was to liberate fellow Christians from Muslim rule and to redeem the Holy Land. Nonetheless, most theorists see irredentism as a more recent phenomenon. The term was coined in the 19th century and is linked to border disputes between modern states.
Nazi Germany's annexation of the Sudetenland in 1938 is an often-cited example of irredentism. At the time, the Sudetenland formed part of Czechoslovakia but had a majority German population. Adolf Hitler justified the annexation based on his allegation that Sudeten Germans were being mistreated by the Czechoslovak government. The Sudetenland was yielded to Germany following the Munich Agreement in an attempt to prevent the outbreak of a major war.
Somalia's invasion of Ethiopia in 1977 is frequently discussed as a case of African irredentism. The goal of this attack was to unite the significant Somali population living in the Ogaden region with their kin by annexing this area to create a Greater Somalia. The invasion escalated into a war of attrition that lasted about eight months. Somalia was close to reaching its goal but failed in the end, mainly due to an intervention by socialist countries.
Argentina's invasion of the Falkland Islands in 1982 is cited as an example of irredentism in South America, where the Argentine military government sought to exploit national sentiment over the islands to deflect attention from domestic concerns. Earlier, President Juan Perón had exploited the issue to reduce British influence in Argentina, instituting educational reforms that taught that the islands were Argentine and fostering a strong nationalist sentiment over the issue. The war ended with a victory for the UK after about two months, even though many analysts had considered the Argentine military position unassailable. Although defeated, Argentina did not officially declare a cessation of hostilities until 1989, and successive Argentine governments have continued to claim the islands. The islands are now self-governing, with the UK responsible for defence and foreign relations. Referenda in 1986 and 2013 showed a preference for British sovereignty among the population. Both the UK and Spain claimed sovereignty over the islands in the 18th century, and Argentina claims them as a colonial legacy dating from its independence in 1816.
The breakup of Yugoslavia in the early 1990s resulted in various irredentist projects. They include Slobodan Milošević's attempts to establish a Greater Serbia by absorbing some regions of neighboring states that were part of former Yugoslavia. A simultaneous similar project aimed at the establishment of a Greater Croatia.
Russia's annexation of Crimea in 2014 is a more recent example of irredentism. Beginning in the 15th century CE, the Crimean peninsula was a Tatar khanate. However, in 1783 the Russian Empire broke a previous treaty and annexed Crimea. In 1954, when both Russia and Ukraine were part of the Soviet Union, Crimea was transferred from Russia to Ukraine. More than fifty years later, Russia alleged that the Ukrainian government did not uphold the rights of ethnic Russians inhabiting Crimea, using this as a justification for the annexation in March 2014. However, it has been claimed that this was only a pretext to increase its territory and power. Ultimately, Russia invaded the mainland territory of Ukraine in February 2022, thereby escalating the war that continues to the present day.
Ethnicity plays a central role in irredentism since most irredentist states justify their expansionist agenda based on shared ethnicity. In this regard, the goal of unifying parts of an ethnic group in a common nation-state is used as a justification for annexing foreign territories and going to war if the neighboring state resists. Ethnicity is a grouping of people according to a set of shared attributes and similarities. It divides people into groups based on attributes like physical features, customs, tradition, historical background, language, culture, religion, and values. Not all these factors are equally relevant for every ethnic group. For some groups, one factor may predominate, as in ethno-linguistic, ethno-racial, and ethno-religious identities. In most cases, ethnic identities are based on a set of common features.
A central aspect of many ethnic identities is that all members share a common homeland or place of origin. This place of origin does not have to correspond to the area where the majority of the ethnic group currently lives in case they migrated from their homeland. Another feature is a common language or dialect. In many cases, religion also forms a vital aspect of ethnicity. Shared culture is another significant factor. It is a wide term and can include characteristic social institutions, diet, dress, and other practices. It is often difficult to draw clear boundaries between people based on their ethnicity. For this reason, some definitions focus less on actual objective features and stress instead that what unites an ethnic group is a subjective belief that such common features exist. In this view, the common belief matters more than the extent to which those shared features actually exist. Examples of large ethnic groups are the Han Chinese, the Arabs, the Bengalis, the Punjabis, and the Turks.
Some theorists, like sociologist John Milton Yinger, use terms like ethnic group or ethnicity as near-synonyms for nation. Nations are usually based on ethnicity but what sets them apart from ethnicity is their political form as a state or a state-like entity. The physical and visible aspects of ethnicity, such as skin color and facial features, are often referred to as race, which may thus be understood as a subset of ethnicity. However, some theorists, like sociologist Pierre van den Berghe, contrast the two by restricting ethnicity to cultural traits and race to physical traits.
Ethnic solidarity can provide a sense of belonging as well as physical and mental security. It can help people identify with a common purpose. However, ethnicity has also been the source of many conflicts. It has been responsible for various forms of mass violence, including ethnic cleansing and genocide. The perpetrators usually form part of the ruling majority and target ethnic minority groups. Not all ethnicity-based conflicts involve mass violence; many forms of ethnic discrimination, for example, do not.
Irredentism is often seen as a product of modern nationalism, i.e. the claim that a nation should have its own sovereign state. In this regard, irredentism emerged with and depends on the modern idea of nation-states. The start of modern nationalism is often associated with the French Revolution in 1789. This spawned various nationalist revolutions in Europe around the mid-nineteenth century. They often resulted in a replacement of dynastic imperial governments. A central aspect of nationalism is that it sees states as entities with clearly delimited borders that should correspond to national boundaries. Irredentism reflects the importance people ascribe to these borders and how exactly they are drawn. One difficulty in this regard is that the exact boundaries are often difficult to justify and are therefore challenged in favor of alternatives. Irredentism manifests some of the most aggressive aspects of modern nationalism. It can be seen as a side effect of nationalism paired with the importance it ascribes to borders and the difficulties in agreeing on them.
Irredentism is closely related to secession. Secession can be defined as "an attempt by an ethnic group claiming a homeland to withdraw with its territory from the authority of a larger state of which it is a part." Irredentism, by contrast, is initiated by members of an ethnic group in one state to incorporate territories across their border housing ethnically kindred people. Secession happens when a part of an existing state breaks away to form an independent entity. This was the case, for example, in the United States, when many of the slaveholding southern states decided to secede from the Union to form the Confederate States of America in 1861.
In the case of irredentism, the break-away area does not become independent but merges into another entity. Irredentism is often seen as a government decision, unlike secession. Both movements are influential phenomena in contemporary politics but, as Horowitz argues, secession movements are much more frequent in postcolonial states. However, he also holds that secession movements are less likely to succeed since they usually have very few military resources compared to irredentist states. For this reason, they normally need prolonged external assistance, often from another state. However, such state policies are subject to change. For example, the Indian government supported the Sri Lankan Tamil secessionists up to 1987 but then reached an agreement with the Sri Lankan government and helped suppress the movement.
Horowitz holds that it is important to distinguish secessionist and irredentist movements since they differ significantly concerning their motivation, context, and goals. Despite these differences, irredentism and secessionism are closely related nonetheless. In some cases, the two tendencies may exist side by side. It is also possible that the advocates of one movement change their outlook and promote the other. Whether a movement favors irredentism or secessionism is determined, among other things, by the prospects of forming an independent state in contrast to joining another state. A further factor is whether the irredentist state is likely to espouse a similar ideology to the one found in the territory intending to break away. The anticipated reaction of the international community is an additional factor, i.e. whether it would embrace, tolerate, or reject the detachment or the absorption by another state.
Irredentism and revanchism are two closely related phenomena because both of them involve the attempt to annex territory which belongs to another state. They differ concerning the motivation fuelling this attempt. Irredentism has a positive goal of building a "greater" state that fulfills the ideals of a nation-state. It aims to unify people claimed to belong together because of their shared national identity based on ethnic, cultural, and historical aspects.
For revanchism, on the other hand, the goal is more negative because it focuses on taking revenge for some form of grievance or injustice suffered earlier. In this regard, it is motivated by resentment and aims to reverse territorial losses due to a previous defeat. In an attempt to contrast irredentism with revanchism, political scientist Anna M. Wittmann argues that Germany's annexation of the Sudetenland in 1938 constitutes a form of irredentism because of its emphasis on a shared language and ethnicity. But she characterizes Germany's invasion of Poland the following year as a form of revanchism because it was justified as revenge intended to reverse previous territorial losses. The term "revanchism" comes from the French term revanche, meaning revenge. It was originally used in the aftermath of the Franco-Prussian War for nationalists intending to reclaim the lost territory of Alsace-Lorraine. Saddam Hussein justified the Iraqi invasion of Kuwait in 1990 by claiming that Kuwait had always been an integral part of Iraq and only became an independent nation due to the interference of the British Empire. | [
{
"paragraph_id": 0,
"text": "Irredentism is a desire by one state to annex a territory of another state. This desire can be motivated by ethnic reasons because the population of the territory is ethnically similar to the population of the parent state. Historical reasons may also be responsible, i.e., that the territory previously formed part of the parent state. However, difficulties in applying the concept to concrete cases have given rise to academic debates about its precise definition. Disagreements concern whether either or both ethnic and historical reasons have to be present and whether non-state actors can also engage in irredentism. A further dispute is whether attempts to absorb a full neighboring state are also included. There are various types of irredentism. For typical forms of irredentism, the parent state already exists before the territorial conflict with a neighboring state arises. However, there are also forms of irredentism in which the parent state is newly created by uniting an ethnic group spread across several countries. Another distinction concerns whether the country to which the disputed territory currently belongs is a regular state, a former colony, or a collapsed state.",
"title": ""
},
{
"paragraph_id": 1,
"text": "A central research topic concerning irredentism is the question of how it is to be explained or what causes it. Many explanations hold that ethnic homogeneity within a state makes irredentism more likely. Discrimination against the ethnic group in the neighboring territory is another contributing factor. A closely related explanation argues that national identities based primarily on ethnicity, culture, and history increase irredentist tendencies. Another approach is to explain irredentism as an attempt to increase power and wealth. In this regard, it is argued that irredentist claims are more likely if the neighboring territory is relatively rich. Many explanations also focus on the regime type and hold that democracies are less likely to engage in irredentism while anocracies are particularly open to it.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Irredentism has been an influential force in world politics since the mid-nineteenth century. It has been responsible for many armed conflicts, even though international law is hostile to it and irredentist movements often fail to achieve their goals. The term was originally coined from the Italian phrase Italia irredenta and referred to an Italian movement after 1878 claiming parts of Switzerland and the Austro-Hungarian Empire. Often discussed cases of irredentism include Nazi Germany's annexation of the Sudetenland, Somalia's invasion of Ethiopia, and Argentina's invasion of the Falkland Islands. Later examples are attempts to establish a Greater Serbia following the breakup of Yugoslavia and Russia's annexation of Crimea following the dissolution of the Soviet Union. Irredentism is closely related to revanchism and secession. Revanchism is an attempt to annex territory belonging to another state. It is motivated by the goal of taking revenge for a previous grievance, in contrast to the goal of irredentism of building an ethnically unified nation-state. In the case of secession, a territory breaks away and forms an independent state instead of merging with another state.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The term irredentism was coined from the Italian phrase Italia irredenta (unredeemed Italy). This phrase originally referred to territory in Austria-Hungary that was mostly or partly inhabited by ethnic Italians. In particular, it applies to Trentino and Trieste, but also Gorizia, Istria, Fiume, and Dalmatia during the 19th and early 20th centuries. Irredentist projects often use the term \"Greater\" to label the desired outcome of their expansion, as in \"Greater Serbia\" or \"Greater Russia\".",
"title": "Definition and etymology"
},
{
"paragraph_id": 4,
"text": "Irredentism is often understood as the claim that territories belonging to one state should be incorporated into another state because their population is ethnically similar or because it historically belonged to the other state before. Many definitions of irredentism have been proposed to give a more precise formulation. Despite a wide overlap concerning its general features, there is no consensus about its exact characterization. The disagreements matter for evaluating whether irredentism was the cause of war which is difficult in many cases and different definitions often lead to opposite conclusions.",
"title": "Definition and etymology"
},
{
"paragraph_id": 5,
"text": "There is wide consensus that irredentism is a form of territorial dispute involving the attempt to annex territories belonging to a neighboring state. However, not all such attempts constitute forms of irredentism and there is no academic consensus on precisely what other features need to be present. This concerns disagreements about who claims the territory, for what reasons they do so, and how much territory is claimed. Most scholars define irredentism as a claim made by one state on the territory of another state. In this regard, there are three essential entities to irredentism: (1) an irredentist state or parent state, (2) a neighboring host state or target state, and (3) the disputed territory belonging to the host state, often referred to as irredenta. According to this definition, popular movements demanding territorial change by non-state actors do not count as irredentist in the strict sense. A different definition characterizes irredentism as the attempt of an ethnic minority to break away and join their \"real\" motherland even though this minority is a non-state actor.",
"title": "Definition and etymology"
},
{
"paragraph_id": 6,
"text": "The reason for engaging in territorial conflict is another issue, with some scholars stating that irredentism is primarily motivated by ethnicity. In this view, the population in the neighboring territory is ethnically similar and the intention is to retrieve the area to unite the people. This definition implies, for example, that the majority of the border disputes in the history of Latin America were not forms of irredentism. Usually, irredentism is defined in terms of the motivation of the irredentist state, even if the territory is annexed against the will of the local population. Other theorists focus more on the historical claim that the disputed territory used to be part of the state's ancestral homeland. This is close to the literal meaning of the original Italian expression \"terra irredenta\" as unredeemed land. In this view, the ethnicity of the people inhabiting this territory is not important. However, it is also possible to combine both characterizations, i.e. that the motivation is either ethnic or historical or both. Some scholars, like Benjamin Neuberger, include geographical reasons in their definitions.",
"title": "Definition and etymology"
},
{
"paragraph_id": 7,
"text": "A further disagreement concerns the amount of area that is to be annexed. Usually, irredentism is restricted to the attempt to incorporate some parts of another state. In this regard, irredentism challenges established borders with the neighboring state but does not challenge the existence of the neighboring state in general. However, some definitions of irredentism also include attempts to absorb the whole neighboring state and not just a part of it. In this sense, claims by both South Korea and North Korea to incorporate the whole of the Korean Peninsula would be considered a form of irredentism.",
"title": "Definition and etymology"
},
{
"paragraph_id": 8,
"text": "A popular view combining many of the elements listed above holds that irredentism is based on incongruence between the borders of a state and the boundaries of the corresponding nation. State borders are usually clearly delimited, both physically and on maps. National boundaries, on the other hand, are less tangible since they correspond to a group's perception of its historic, cultural, and ethnic boundaries. Irredentism may manifest if state borders do not correspond to national boundaries. The objective of irredentism is to enlarge a state to establish a congruence between its borders and the boundaries of the corresponding nation.",
"title": "Definition and etymology"
},
{
"paragraph_id": 9,
"text": "Various types of irredentism have been proposed. However, not everyone agrees that all the types listed here constitute forms of irredentism and it often depends on what definition is used. According to political theorists Naomi Chazan and Donald L. Horowitz, there are two types of irredentism. The typical case involves one state that intends to annex territories belonging to a neighboring state. Nazi Germany’s claim on the Sudetenland of Czechoslovakia is an example of this form of irredentism.",
"title": "Types"
},
{
"paragraph_id": 10,
"text": "For the second type, there is no pre-existing parent state. Instead, a cohesive group existing as a minority in multiple countries intends to unify to form a new parent state. The intended creation of a Kurdistan state uniting the Kurds living in Turkey, Syria, Iraq, and Iran is an example of the second type. If such a project is successful for only one segment, the result is secession and not irredentism. This happened, for example, during the breakup of Yugoslavia when Yugoslavian Slovenes formed the new state of Slovenia while the Austrian Slovenes did not join them and remained part of Austria. Not all theorists accept that the second type constitutes a form of irredentism. In this regard, it is often argued that it is too similar to secession to maintain a distinction between the two. For example, political scholar Benyamin Neuberger holds that a pre-existing parent state is necessary for irredentism.",
"title": "Types"
},
{
"paragraph_id": 11,
"text": "Political scientist Thomas Ambrosio restricts his definition to cases involving a pre-existing parent state and distinguishes three types of irredentism: (1) between two states, (2) between a state and a former colony, and (3) between a state and a collapsed state. The typical case is between two states. A textbook example of this is Somalia's invasion of Ethiopia. In the second case of decolonization, the territory to be annexed is a former colony of another state and not a regular part of it. An example is the Indonesian invasion and occupation of the former Portuguese colony of East Timor. In the case of state collapse, one state disintegrates and a neighboring state absorbs some of its former territories. This was the case for the irredentist movements by Croatia and Serbia during the breakup of Yugoslavia.",
"title": "Types"
},
{
"paragraph_id": 12,
"text": "Explanations of irredentism try to determine what causes irredentism, how it unfolds, and how it can be peacefully resolved. Various hypotheses have been proposed but there is still very little consensus on how irredentism is to be explained despite its prevalence and its long history of provoking armed conflicts. Some of these proposals can be combined but others conflict with each other and the available evidence may not be sufficient to decide between them. An active research topic in this regard concerns the reasons for irredentism. Many countries have ethnic kin outside their borders. But only a few are willing to engage in violent conflicts to annex foreign territory in an attempt to unite their kin. Research on the causes of irredentism tries to explain why some countries pursue irredentism but others do not. Relevant factors often discussed include ethnicity, nationalism, economic considerations, the desire to increase power, and the type of regime.",
"title": "Explanations"
},
{
"paragraph_id": 13,
"text": "A common explanation of irredentism focuses on ethnic arguments. It is based on the observation that irredentist claims are primarily advanced by states with a homogenous ethnic population. This is explained by the idea that, if a state is composed of several ethnic groups, then annexing a territory inhabited primarily by one of those groups would shift the power balance in favor of this group. For this reason, other groups in the state are likely to internally reject the irredentist claims. This inhibiting factor is not present for homogenous states. A similar argument is also offered for the enclave to be annexed: an ethnically heterogenous enclave is less likely to desire to be absorbed by another state for ethnic reasons since this would only benefit one ethnic group. These considerations explain, for example, why irredentism is not very common in Africa since most African states are ethnically heterogeneous. Relevant factors for the ethnic motivation for irredentism are how large the dominant ethnic group is relative to other groups and how large it is in absolute terms. It also matters whether the ethnic group is relatively dispersed or located in a small core area and whether it is politically disadvantaged.",
"title": "Explanations"
},
{
"paragraph_id": 14,
"text": "Explanations focusing on nationalism are closely related to ethnicity-based explanations. Nationalism can be defined as the claim that the boundaries of a state should match those of the nation. According to constructivist accounts, for example, the dominant national identity is one of the central factors behind irredentism. In this view, identities based on ethnicity, culture, and history can easily invite tendencies to enlarge national borders. They may justify the goal of integrating ethnically and culturally similar territories. Civic national identities focusing more on a political nature, on the other hand, are more closely tied to pre-existing national boundaries.",
"title": "Explanations"
},
{
"paragraph_id": 15,
"text": "Structural accounts use a slightly different approach and focus on the relationship between nationalism and the regional context. They focus on the tension between state sovereignty and national self-determination. State sovereignty is the principle of international law holding that each state has sovereignty over its own territory. It means that states are not allowed to interfere with essentially domestic affairs of other states. National self-determination, on the other hand, concerns the right of people to determine their own international political status. According to the structural explanation, emphasis on national self-determination may legitimize irredentist claims while the principle of state sovereignty defends the status quo of the existing sovereign borders. This position is supported by the observation that irredentist conflicts are much more common during times of international upheavals.",
"title": "Explanations"
},
{
"paragraph_id": 16,
"text": "Another factor commonly cited as a force fueling irredentism is discrimination against the main ethnic group in the enclave. Irredentist states often try to legitimize their aggression against neighbors by presenting them as humanitarian interventions aimed at protecting their discriminated ethnic kin. This justification was used, for example, in Armenia's engagement in the Nagorno-Karabakh conflict, in Serbia's involvement in the Croatian War of Independence, and in Russia's annexation of Crimea. Some political theorists, like David S. Siroky and Christopher W. Hale, hold that there is little empirical evidence for arguments based on ethnic homogeneity and discrimination. In this view, they are mainly used as a pretext to hide other goals, such as material gain.",
"title": "Explanations"
},
{
"paragraph_id": 17,
"text": "Another relevant factor is the outlook of the population inhabiting the territory to be annexed. The desire of the irredentist state to annex a foreign territory and the desire of that territory to be annexed do not always overlap. In some cases, a minority group does not want to be annexed, as was the case for the Crimean Tatars in Russia's annexation of Crimea. In other cases, a minority group would want to be annexed but the intended parent state is not interested.",
"title": "Explanations"
},
{
"paragraph_id": 18,
"text": "Various accounts stress the role of power and economic benefits as reasons for irredentism. Realist explanations focus on the power balance between the irredentist state and the target state: the more this power balance shifts in favor of the irredentist state, the more likely violent conflicts become. A key factor in this regard is also the reaction of the international community, i.e. whether irredentist claims are tolerated or rejected. Irredentism can be used as a tool or pretext to increase the parent state's power. Rational choice theories study how irredentism is caused by decision-making processes of certain groups within a state. In this view, irredentism is a tool used by elites to secure their political interests. They do so by appealing to popular nationalist sentiments. This can be used, for example, to gain public support against political rivals or to divert attention away from domestic problems.",
"title": "Explanations"
},
{
"paragraph_id": 19,
"text": "Other explanations focus on economic factors. For example, larger states enjoy advantages that come with having an increased market and decreased per capita cost of defense. However, there are also disadvantages to having a bigger state, such as the challenges that come with accommodating a wider range of citizens' preferences. Based on these lines of thought, it has been argued that states are more likely to advocate irredentist claims if the enclave is a relatively rich territory.",
"title": "Explanations"
},
{
"paragraph_id": 20,
"text": "An additional relevant factor is the regime type of both the irredentist state and the neighboring state. In this regard, it is often argued that democratic states are less likely to engage in irredentism. One reason cited is that democracies often are more inclusive of other ethnic groups. Another is that democracies are in general less likely to engage in violent conflicts. This is closely related to democratic peace theory, which claims that democracies try to avoid armed conflicts with other democracies. This is also supported by the observation that most irredentist conflicts are started by authoritarian regimes. However, irredentism constitutes a paradox for democratic systems. The reason is that democratic ideals pertaining to the ethnic group can often be used to justify its claim, which may be interpreted as the expression of a popular will toward unification. But there are also cases of irredentism made primarily by a government that is not broadly supported by the population.",
"title": "Explanations"
},
{
"paragraph_id": 21,
"text": "According to Siroky and Hale, anocratic regimes are most likely to engage in irredentist conflicts and to become their victim. This is based on the idea that they share some democratic ideals favoring irredentism but often lack institutional stability and accountability. This makes it more likely for the elites to consolidate their power using ethno-nationalist appeals to the masses.",
"title": "Explanations"
},
{
"paragraph_id": 22,
"text": "Irredentism is a widespread phenomenon and has been an influential force in world politics since the mid-nineteenth century. It has been responsible for countless conflicts. There are still many unresolved irredentist disputes today that constitute discords between nations. In this regard, irredentism is a potential source of conflict in many places and often escalates into military confrontations between states. For example, international relation theorist Markus Kornprobst argues that \"no other issue over which states fight is as war-prone as irredentism\". Political scholar Rachel Walker points out that \"there is scarcely a country in the world that is not involved in some sort of irredentist quarrel ... although few would admit to this\". Political theorists Stephen M. Saideman and R. William Ayres argue that many of the most important conflicts of the 1990s were caused by irredentism, such as the wars for a Greater Serbia and a Greater Croatia. Irredentism carries a lot of potential for future conflicts since many states have kin groups in adjacent countries. It has been argued that it poses a significant danger to human security and the international order. For these reasons, irredentism has been a central topic in the field of international relations.",
"title": "Importance, reactions, and consequences"
},
{
"paragraph_id": 23,
"text": "For the most part, international law is hostile to irredentism. For example, the United Nations Charter calls for respect for established territorial borders and defends state sovereignty. Similar outlooks are taken by the Organization of African Unity, the Organization of American States, and the Helsinki Final Act. Since irredentist claims are based on conflicting sovereignty assertions, it is often difficult to find a working compromise. Peaceful resolutions of irredentist conflicts often result in mutual recognition of de facto borders rather than territorial change. International relation theorists Martin Griffiths et al. argue that the threat of rising irredentism may be reduced by focusing on political pluralism and respect for minority rights.",
"title": "Importance, reactions, and consequences"
},
{
"paragraph_id": 24,
"text": "Irredentist movements, peaceful or violent, are rarely successful. In many cases, despite aiming to help ethnic minorities, irredentism often has the opposite effect and ends up worsening their living conditions. On the one hand, the state still in control of those territories may decide to further discriminate against them as an attempt to decrease the threat to its national security. On the other hand, the irredentist state may merely claim to care about the ethnic minorities but, in truth, use such claims only as a pretext to increase its territory or to destabilize an opponent.",
"title": "Importance, reactions, and consequences"
},
{
"paragraph_id": 25,
"text": "The emergence of irredentism is tied to the rise of modern nationalism and the idea of a nation-state, which are often linked to the French Revolution. However, some political scholars, like Griffiths et al., argue that phenomena similar to irredentism existed even before. For example, part of the justification for the crusades was to liberate fellow Christians from Muslim rule and to redeem the Holy Land. Nonetheless, most theorists see irredentism as a more recent phenomenon. The term was coined in the 19th century and is linked to border disputes between modern states.",
"title": "Often-discussed historical examples"
},
{
"paragraph_id": 26,
"text": "Nazi Germany's annexation of the Sudetenland in 1938 is an often-cited example of irredentism. At the time, the Sudetenland formed part of Czechoslovakia but had a majority German population. Adolf Hitler justified the annexation based on his allegation that Sudeten Germans were being mistreated by the Czechoslovak government. The Sudetenland was yielded to Germany following the Munich Agreement in an attempt to prevent the outbreak of a major war.",
"title": "Often-discussed historical examples"
},
{
"paragraph_id": 27,
"text": "Somalia's invasion of Ethiopia in 1977 is frequently discussed as a case of African irredentism. The goal of this attack was to unite the significant Somali population living in the Ogaden region with their kin by annexing this area to create a Greater Somalia. The invasion escalated into a war of attrition that lasted about eight months. Somalia was close to reaching its goal but failed in the end, mainly due to an intervention by socialist countries.",
"title": "Often-discussed historical examples"
},
{
"paragraph_id": 28,
"text": "Argentina's invasion of the Falkland Islands in 1982 is cited as an example of irredentism in South America, where the Argentine military government sought to exploit national sentiment over the islands to deflect attention from domestic concerns. President Juan Perón exploited the issue to reduce British influence in Argentina, instituting educational reform teaching the islands were Argentine and creating a strong nationalist sentiment over the issue. The war ended with a victory for the UK after about two months even though many analysts considered the Argentine military position unassailable. Although defeated, Argentina did not officially declare the cessation of hostilities until 1989 and successive Argentine Governments have continued to claim the islands. The islands are now self-governing with the UK responsible for defence and foreign relations. Referenda in 1986 and 2013 show a preference for British sovereignty among the population. Both the UK and Spain claimed sovereignty in the 18th Century and Argentina claims the islands as a colonial legacy from independence in 1816.",
"title": "Often-discussed historical examples"
},
{
"paragraph_id": 29,
"text": "The breakup of Yugoslavia in the early 1990s resulted in various irredentist projects. They include Slobodan Milošević's attempts to establish a Greater Serbia by absorbing some regions of neighboring states that were part of former Yugoslavia. A simultaneous similar project aimed at the establishment of a Greater Croatia.",
"title": "Often-discussed historical examples"
},
{
"paragraph_id": 30,
"text": "Russia's annexation of Crimea in 2014 is a more recent example of irredentism. Beginning in the 15th century CE, the Crimean peninsula was a Tartar Khanate. However, in 1783 the Russian Empire broke a previous treaty and annexed Crimea. In 1954, when both Russia and Ukraine were part of the Soviet Union, it was transferred from Russia to Ukraine. More than fifty years later, Russia alleged that the Ukrainian government did not uphold the rights of ethnic Russians inhabiting Crimea, using this as a justification for the annexation in March 2014. However, it has been claimed that this was only a pretext to increase its territory and power. Ultimately, Russia invaded the mainland territory of Ukraine in February 2022, thereby escalating the war that continues to the present day.",
"title": "Often-discussed historical examples"
},
{
"paragraph_id": 31,
"text": "Ethnicity plays a central role in irredentism since most irredentist states justify their expansionist agenda based on shared ethnicity. In this regard, the goal of unifying parts of an ethnic group in a common nation-state is used as a justification for annexing foreign territories and going to war if the neighboring state resists. Ethnicity is a grouping of people according to a set of shared attributes and similarities. It divides people into groups based on attributes like physical features, customs, tradition, historical background, language, culture, religion, and values. Not all these factors are equally relevant for every ethnic group. For some groups, one factor may predominate, as in ethno-linguistic, ethno-racial, and ethno-religious identities. In most cases, ethnic identities are based on a set of common features.",
"title": "Related concepts"
},
{
"paragraph_id": 32,
"text": "A central aspect of many ethnic identities is that all members share a common homeland or place of origin. This place of origin does not have to correspond to the area where the majority of the ethnic group currently lives in case they migrated from their homeland. Another feature is a common language or dialect. In many cases, religion also forms a vital aspect of ethnicity. Shared culture is another significant factor. It is a wide term and can include characteristic social institutions, diet, dress, and other practices. It is often difficult to draw clear boundaries between people based on their ethnicity. For this reason, some definitions focus less on actual objective features and stress instead that what unites an ethnic group is a subjective belief that such common features exist. In this view, the common belief matters more than the extent to which those shared features actually exist. Examples of large ethnic groups are the Han Chinese, the Arabs, the Bengalis, the Punjabis, and the Turks.",
"title": "Related concepts"
},
{
"paragraph_id": 33,
"text": "Some theorists, like sociologist John Milton Yinger, use terms like ethnic group or ethnicity as near-synonyms for nation. Nations are usually based on ethnicity but what sets them apart from ethnicity is their political form as a state or a state-like entity. The physical and visible aspects of ethnicity, such as skin color and facial features, are often referred to as race, which may thus be understood as a subset of ethnicity. However, some theorists, like sociologist Pierre van den Berghe, contrast the two by restricting ethnicity to cultural traits and race to physical traits.",
"title": "Related concepts"
},
{
"paragraph_id": 34,
"text": "Ethnic solidarity can provide a sense of belonging as well as physical and mental security. It can help people identify with a common purpose. However, ethnicity has also been the source of many conflicts. It has been responsible for various forms of mass violence, including ethnic cleansing and genocide. The perpetrators usually form part of the ruling majority and target ethnic minority groups. Not all ethnic-based conflicts involve mass violence, like many forms of ethnic discrimination.",
"title": "Related concepts"
},
{
"paragraph_id": 35,
"text": "Irredentism is often seen as a product of modern nationalism, i.e. the claim that a nation should have its own sovereign state. In this regard, irredentism emerged with and depends on the modern idea of nation-states. The start of modern nationalism is often associated with the French Revolution in 1789. This spawned various nationalist revolutions in Europe around the mid-nineteenth century. They often resulted in a replacement of dynastic imperial governments. A central aspect of nationalism is that it sees states as entities with clearly delimited borders that should correspond to national boundaries. Irredentism reflects the importance people ascribe to these borders and how exactly they are drawn. One difficulty in this regard is that the exact boundaries are often difficult to justify and are therefore challenged in favor of alternatives. Irredentism manifests some of the most aggressive aspects of modern nationalism. It can be seen as a side effect of nationalism paired with the importance it ascribes to borders and the difficulties in agreeing on them.",
"title": "Related concepts"
},
{
"paragraph_id": 36,
"text": "Irredentism is closely related to secession. Secession can be defined as \"an attempt by an ethnic group claiming a homeland to withdraw with its territory from the authority of a larger state of which it is a part.\" Irredentism, by contrast, is initiated by members of an ethnic group in one state to incorporate territories across their border housing ethnically kindred people. Secession happens when a part of an existing state breaks away to form an independent entity. This was the case, for example, in the United States, when many of the slaveholding southern states decided to secede from the Union to form the Confederate States of America in 1861.",
"title": "Related concepts"
},
{
"paragraph_id": 37,
"text": "In the case of irredentism, the break-away area does not become independent but merges into another entity. Irredentism is often seen as a government decision, unlike secession. Both movements are influential phenomena in contemporary politics but, as Horowitz argues, secession movements are much more frequent in postcolonial states. However, he also holds that secession movements are less likely to succeed since they usually have very few military resources compared to irredentist states. For this reason, they normally need prolonged external assistance, often from another state. However, such state policies are subject to change. For example, the Indian government supported the Sri Lankan Tamil secessionists up to 1987 but then reach an agreement with the Sri Lankan government and helped suppress the movement.",
"title": "Related concepts"
},
{
"paragraph_id": 38,
"text": "Horowitz holds that it is important to distinguish secessionist and irredentist movements since they differ significantly concerning their motivation, context, and goals. Despite these differences, irredentism and secessionism are closely related nonetheless. In some cases, the two tendencies may exist side by side. It is also possible that the advocates of one movement change their outlook and promote the other. Whether a movement favors irredentism or secessionism is determined, among other things, by the prospects of forming an independent state in contrast to joining another state. A further factor is whether the irredentist state is likely to espouse a similar ideology to the one found in the territory intending to break away. The anticipated reaction of the international community is an additional factor, i.e. whether it would embrace, tolerate, or reject the detachment or the absorption by another state.",
"title": "Related concepts"
},
{
"paragraph_id": 39,
"text": "Irredentism and revanchism are two closely related phenomena because both of them involve the attempt to annex territory which belongs to another state. They differ concerning the motivation fuelling this attempt. Irredentism has a positive goal of building a \"greater\" state that fulfills the ideals of a nation-state. It aims to unify people claimed to belong together because of their shared national identity based on ethnic, cultural, and historical aspects.",
"title": "Related concepts"
},
{
"paragraph_id": 40,
"text": "For revanchism, on the other hand, the goal is more negative because it focuses on taking revenge for some form of grievance or injustice suffered earlier. In this regard, it is motivated by resentment and aims to reverse territorial losses due to a previous defeat. In an attempt to contrast irredentism with revanchism, political scientist Anna M. Wittmann argues that Germany's annexation of the Sudetenland in 1938 constitutes a form of irredentism because of its emphasis on a shared language and ethnicity. But she characterizes Germany's invasion of Poland the following year as a form of revanchism due to its justification as a revenge intended to reverse previous territorial losses. The term \"revanchism\" comes from the French term revanche, meaning revenge. It was originally used in the aftermath of the Franco-Prussian War for nationalists intending to reclaim the lost territory of Alsace-Lorraine. Saddam Hussein justified the Iraqi invasion of Kuwait in 1990 by claiming that Kuwait had always been an integral part of Iraq and only became an independent nation due to the interference of the British Empire.",
"title": "Related concepts"
}
]
| Irredentism is a desire by one state to annex a territory of another state. This desire can be motivated by ethnic reasons because the population of the territory is ethnically similar to the population of the parent state. Historical reasons may also be responsible, i.e., that the territory previously formed part of the parent state. However, difficulties in applying the concept to concrete cases have given rise to academic debates about its precise definition. Disagreements concern whether either or both ethnic and historical reasons have to be present and whether non-state actors can also engage in irredentism. A further dispute is whether attempts to absorb a full neighboring state are also included. There are various types of irredentism. For typical forms of irredentism, the parent state already exists before the territorial conflict with a neighboring state arises. However, there are also forms of irredentism in which the parent state is newly created by uniting an ethnic group spread across several countries. Another distinction concerns whether the country to which the disputed territory currently belongs is a regular state, a former colony, or a collapsed state. A central research topic concerning irredentism is the question of how it is to be explained or what causes it. Many explanations hold that ethnic homogeneity within a state makes irredentism more likely. Discrimination against the ethnic group in the neighboring territory is another contributing factor. A closely related explanation argues that national identities based primarily on ethnicity, culture, and history increase irredentist tendencies. Another approach is to explain irredentism as an attempt to increase power and wealth. In this regard, it is argued that irredentist claims are more likely if the neighboring territory is relatively rich. Many explanations also focus on the regime type and hold that democracies are less likely to engage in irredentism while anocracies are particularly open to it. Irredentism has been an influential force in world politics since the mid-nineteenth century. It has been responsible for many armed conflicts, even though international law is hostile to it and irredentist movements often fail to achieve their goals. The term was originally coined from the Italian phrase Italia irredenta and referred to an Italian movement after 1878 claiming parts of Switzerland and the Austro-Hungarian Empire. Often discussed cases of irredentism include Nazi Germany's annexation of the Sudetenland, Somalia's invasion of Ethiopia, and Argentina's invasion of the Falkland Islands. Later examples are attempts to establish a Greater Serbia following the breakup of Yugoslavia and Russia's annexation of Crimea following the dissolution of the Soviet Union. Irredentism is closely related to revanchism and secession. Revanchism is an attempt to annex territory belonging to another state. It is motivated by the goal of taking revenge for a previous grievance, in contrast to the goal of irredentism of building an ethnically unified nation-state. In the case of secession, a territory breaks away and forms an independent state instead of merging with another state. | 2001-11-06T09:34:18Z | 2023-12-28T08:55:45Z | [
"Template:Multiref2",
"Template:Main",
"Template:Div col",
"Template:Reflist",
"Template:Cite news",
"Template:Cite NIE",
"Template:Wiktionary",
"Template:Good article",
"Template:Lang",
"Template:Notelist",
"Template:Cite book",
"Template:Cite journal",
"Template:Irredentism",
"Template:Autonomous types of first-tier administration",
"Template:Refbegin",
"Template:Cite EB1911",
"Template:Sfn",
"Template:Multiple image",
"Template:Annotated link",
"Template:Short description",
"Template:Div col end",
"Template:Refend",
"Template:Pan-nationalist concepts",
"Template:Authority control",
"Template:Efn",
"Template:Cite web",
"Template:Commons category",
"Template:Nationalism"
]
| https://en.wikipedia.org/wiki/Irredentism |
15,227 | Inuit languages | The Inuit languages are a closely related group of indigenous American languages traditionally spoken across the North American Arctic and the adjacent subarctic regions as far south as Labrador. The Inuit languages are one of the two branches of the Eskimoan language family, the other being the Yupik languages, which are spoken in Alaska and the Russian Far East. Most Inuit people live in one of three countries: Greenland, a self-governing territory within the Kingdom of Denmark; Canada, specifically in Nunavut, the Inuvialuit Settlement Region of the Northwest Territories, the Nunavik region of Quebec, and the Nunatsiavut and NunatuKavut regions of Labrador; and the United States, specifically in northern and western Alaska.
The total population of Inuit speaking their traditional languages is difficult to assess with precision, since most counts rely on self-reported census data that may not accurately reflect usage or competence. Greenland census estimates place the number of Inuit language speakers there at roughly 50,000. According to the 2021 Canadian census, the Inuit population of Canada is 70,540, of which 33,790 report Inuit as their first language. Greenland and Canada account for the bulk of Inuit speakers, although about 7,500 Alaskans speak some variety of an Inuit language out of a total population of over 13,000 Inuit. An estimated 7,000 Greenlandic Inuit live in Denmark, the largest group outside of North America. Thus, the total population of Inuit speakers is about 100,000 people.
The traditional language of the Inuit is a system of closely interrelated dialects that are not readily comprehensible from one end of the Inuit world to the other; some people do not think of it as a single language but rather a group of languages. However, there are no clear criteria for breaking the Inuit language into specific member languages since it forms a dialect continuum. Each band of Inuit understands its neighbours, and most likely its neighbours' neighbours; but at some remove, comprehensibility drops to a very low level.
As a result, Inuit in different places use different words for their own variants and for the entire group of languages, and this ambiguity has been carried into other languages, creating a great deal of confusion over what labels should be applied to it.
In Greenland the official form of Inuit language, and the official language of the state, is called Kalaallisut. In other languages, it is often called Greenlandic or some cognate term. The Inuit languages of Alaska are called Inupiatun, but the variants of the Seward Peninsula are distinguished from the other Alaskan variants by calling them Qawiaraq, or for some dialects, Bering Strait Inupiatun.
In Canada, the word Inuktitut is routinely used to refer to all Canadian variants of the Inuit traditional language, and it is under that name that it is recognised as one of the official languages of Nunavut and the Northwest Territories. However, one of the variants of western Nunavut, and the eastern Northwest Territories, is called Inuinnaqtun to distinguish itself from the dialects of eastern Canada, while the variants of the Northwest Territories are sometimes called Inuvialuktun and have in the past sometimes been called Inuktun. In those dialects, the name is sometimes rendered as Inuktitun to reflect dialectal differences in pronunciation. The Inuit language of Quebec is called Inuttitut by its speakers, and often by other people, but this is a minor variation in pronunciation. In Labrador, the language is called Inuttut or, often in official documents, by the more descriptive name Labradorimiutut. Furthermore, Canadians – both Inuit and non-Inuit – sometimes use the word Inuktitut to refer to all Inuit language variants, including those of Alaska and Greenland.
The phrase "Inuit language" is largely limited to professional discourse, since in each area, there is one or more conventional terms that cover all the local variants; or it is used as a descriptive term in publications where readers can't necessarily be expected to know the locally used words. In Nunavut the government groups all dialects of Inuktitut and Inuinnaqtun under the term Inuktut.
Although many people refer to the Inuit language as Eskimo language, this is a broad term that also includes the Yupik languages, and is in addition strongly discouraged in Canada and diminishing in usage elsewhere. See the article on Eskimo for more information on this word.
The Inuit languages constitute a branch of the Eskimo–Aleut language family. They are closely related to the Yupik languages and more remotely to Aleut. These other languages are all spoken in western Alaska, United States, and eastern Chukotka, Russia. They are not discernibly related to other indigenous languages of the Americas or northeast Asia, although there have been some unsubstantiated proposals that they are distantly related to the Uralic languages of western Siberia and northern Europe, in a proposed Uralo-Siberian grouping, or even to the Indo-European languages as part of a Nostratic superphylum. Some had previously lumped them in with the Paleosiberian languages, though that is a geographic rather than a linguistic grouping.
Early forms of the Inuit language are believed to have been spoken by the Thule people, who migrated east from Beringia towards the Arctic Archipelago, which had been occupied by people of the Dorset culture since the beginning of the 2nd millennium. By 1300, the Inuit and their language had reached western Greenland, and finally east Greenland roughly at the same time the Viking colonies in southern Greenland disappeared. It is generally believed that it was during this centuries-long eastward migration that the Inuit language became distinct from the Yupik languages spoken in Western Alaska and Chukotka.
Until 1902, a possible enclave of the Dorset, the Sadlermiut (in modern Inuktitut spelling Sallirmiut), existed on Southampton Island. Almost nothing is known about their language, but the few eyewitness accounts tell of them speaking a "strange dialect". This suggests that they also spoke an Inuit language, but one quite distinct from the forms spoken in Canada today.
The Yupik and Inuit languages are very similar syntactically and morphologically. Their common origin can be seen in a number of cognates:
The western Alaskan variants retain a large number of features present in proto-Inuit language and in Yup'ik, enough so that they might be classed as Yup'ik languages if they were viewed in isolation from the larger Inuit world.
The Inuit languages are a fairly closely linked set of languages which can be broken up using a number of different criteria. Traditionally, Inuit describe dialect differences by means of place names to describe local idiosyncrasies in language: The dialect of Igloolik versus the dialect of Iqaluit, for example. However, political and sociological divisions are increasingly the principal criteria for describing different variants of the Inuit languages because of their links to different writing systems, literary traditions, schools, media sources and borrowed vocabulary. This makes any partition of the Inuit language somewhat problematic. This article will use labels that try to synthesise linguistic, sociolinguistic and political considerations in splitting up the Inuit dialect spectrum. This scheme is not the only one used or necessarily one used by Inuit themselves, but its labels do try to reflect the usages most seen in popular and technical literature.
In addition to the territories listed below, some 7,000 Greenlandic speakers are reported to live in mainland Denmark, and according to the 2001 census roughly 200 self-reported Inuktitut native speakers regularly live in parts of Canada which are outside traditional Inuit lands.
Of the roughly 13,000 Alaskan Iñupiat, as few as 3,000 may still be able to speak Iñupiaq, with most of them over the age of 40. Alaskan Inupiat speak three distinct dialects, which are only partially mutually intelligible:
The Inuit languages are official in the Northwest Territories and Nunavut (the dominant language in the latter); have a high level of official support in Nunavik, a semi-autonomous portion of Quebec; and are still spoken in some parts of Labrador. Generally, Canadians refer to all dialects spoken in Canada as Inuktitut, but the terms Inuvialuktun, Inuinnaqtun, and Inuttut (also called Nunatsiavummiutut, Labradorimiutut or Inuttitut) have some currency in referring to the variants of specific areas.
Greenland counts approximately 50,000 speakers of the Inuit languages, over 90% of whom speak west Greenlandic dialects at home.
Greenlandic was strongly supported by the Danish Christian mission (conducted by the Danish state church) in Greenland. Several major dictionaries were created, beginning with Poul Egede's Dictionarium Grönlandico-danico-latinum (1750) and culminating with Samuel Kleinschmidt's (1871) "Den grønlandske ordbog" (Transl. "The Greenlandic Dictionary"), which contained a Greenlandic grammatical system that has formed the basis of modern Greenlandic grammar. Together with the fact that until 1925 Danish was not taught in the public schools, these policies had the consequence that Greenlandic has always enjoyed, and continues to enjoy, a very strong position in Greenland, both as a spoken and as a written language.
Eastern Canadian Inuit language variants have fifteen consonants and three vowels (which can be long or short).
Consonants are arranged with five places of articulation: bilabial, alveolar, palatal, velar and uvular; and three manners of articulation: voiceless stops, voiced continuants, and nasals, as well as two additional sounds—voiceless fricatives. The Alaskan dialects have an additional manner of articulation, the retroflex, which was present in proto-Inuit language. Retroflexes have disappeared in all the Canadian and Greenlandic dialects. In Natsilingmiutut, the voiced palatal stop /ɟ/ derives from a former retroflex.
Almost all Inuit language variants have only three basic vowels and make a phonological distinction between short and long forms of all vowels. The only exceptions are at the extreme edges of the Inuit world: parts of Greenland, and in western Alaska.
The Inuit languages, like other Eskimo–Aleut languages, have very rich morphological systems in which a succession of different morphemes are added to root words (like verb endings in European languages) to indicate things that, in languages like English, would require several words to express. (See also: Agglutinative language and Polysynthetic language) All Inuit words begin with a root morpheme to which other morphemes are suffixed. The language has hundreds of distinct suffixes, in some dialects as many as 700. Fortunately for learners, the language has a highly regular morphology. Although the rules are sometimes very complicated, they do not have exceptions in the sense that English and other Indo-European languages do.
This system makes words very long, and potentially unique. For example, in central Nunavut Inuktitut:
This long word is composed of a root word tusaa- "to hear" followed by five suffixes:
This sort of word construction is pervasive in the Inuit languages and makes them very unlike English. In one large Canadian corpus – the Nunavut Hansard – 92% of all words appear only once, in contrast to a small percentage in most English corpora of similar size. This makes the application of Zipf's law quite difficult in the Inuit language. Furthermore, the notion of a part of speech can be somewhat complicated in the Inuit languages. Fully inflected verbs can be interpreted as nouns. The word ilisaijuq can be interpreted as a fully inflected verb: "he studies", but can also be interpreted as a noun: "student". That said, the meaning is probably obvious to a fluent speaker, when put in context.
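As an illustrative aside (not part of the original article), the statistic cited above can be estimated for any tokenised corpus with a few lines of Python, interpreting "words" as distinct word forms; the toy sentence below is only a stand-in for a real corpus such as the Nunavut Hansard.

    from collections import Counter

    def hapax_fraction(tokens):
        # Fraction of distinct word forms that occur exactly once in the corpus.
        counts = Counter(tokens)
        hapaxes = sum(1 for c in counts.values() if c == 1)
        return hapaxes / len(counts)

    sample = "the dog saw the cat and the cat saw the dog running".split()
    print(hapax_fraction(sample))  # about 0.33 for this English toy text; Inuktitut corpora score far higher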
The morphology and syntax of the Inuit languages vary to some degree between dialects, and the article Inuit grammar describes primarily central Nunavut dialects, but the basic principles will generally apply to all of them and to some degree to Yupik languages as well.
Both the names of places and people tend to be highly prosaic when translated. Iqaluit, for example, is simply the plural of the noun iqaluk "fish" ("Arctic char", "salmon" or "trout" depending on dialect). Igloolik (Iglulik) means place with houses, a word that could be interpreted as simply town; Inuvik is place of people; Baffin Island, Qikiqtaaluk in Inuktitut, translates approximately to "big island".
Although practically all Inuit have legal names based on southern naming traditions, at home and among themselves they still use native naming traditions. There too, names tend to consist of highly prosaic words. The Inuit traditionally believed that by adopting the name of a dead person or a class of things, they could take some of their characteristics or powers, and enjoy a part of their identity. (This is why they were always very willing to accept European names: they believed that this made them equal to the Europeans.)
Common native names in Canada include "Ujarak" (rock), "Nuvuk" (headland), "Nasak" (hat, or hood), "Tupiq" or "Tupeq" in Kalaallisut (tent), and "Qajaq" (kayak). Inuit also use animal names, traditionally believing that by using those names, they took on some of the characteristics of that animal: "Nanuq" or "Nanoq" in Kalaallisut (polar-bear), "Uqalik" or "Ukaleq" in Kalaallisut (Arctic hare), and "Tiriaq" or "Teriaq" in Kalaallisut (mouse) are favourites. In other cases, Inuit are named after dead people or people in traditional tales, by naming them after anatomical traits those people are believed to have had. Examples include "Itigaituk" (has no feet), "Anana" or "Anaana" (mother), "Piujuq" (beautiful) and "Tulimak" (rib). Inuit may have any number of names, given by parents and other community members.
In the 1920s, changes in lifestyle and serious epidemics like tuberculosis made the government of Canada interested in tracking the Inuit of Canada's Arctic. Traditionally Inuit names reflect what is important in Inuit culture: environment, landscape, seascape, family, animals, birds, spirits. However these traditional names were difficult for non-Inuit to parse. Also, the agglutinative nature of Inuit language meant that names seemed long and were difficult for southern bureaucrats and missionaries to pronounce.
Thus, in the 1940s, the Inuit were given disc numbers, recorded on a special leather ID tag, like a dog tag. They were required to keep the tag with them always. (Some tags are now so old and worn that the number is polished out.) The numbers were assigned with a letter prefix that indicated location (E = east), community, and then the order in which the census-taker saw the individual. In some ways this state renaming was abetted by the churches and missionaries, who viewed the traditional names and their calls to power as related to shamanism and paganism.
They encouraged people to take Christian names. So a young woman who was known to her relatives as "Lutaaq, Pilitaq, Palluq, or Inusiq" and had been baptised as "Annie" was under this system to become Annie E7-121. People adopted the number-names, their family members' numbers, etc., and learned all the region codes (like knowing a telephone area code).
Until Inuit began studying in the south, many did not know that numbers were not normal parts of Christian and English naming systems. Then in 1969, the government started Project Surname, headed by Abe Okpik, to replace number-names with patrilineal "family surnames".
A popular belief exists that the Inuit have an unusually large number of words for snow. This is not accurate, and results from a misunderstanding of the nature of polysynthetic languages. In fact, the Inuit have only a few base roots for snow: 'qanniq-' ('qanik-' in some dialects), which is used most often like the verb to snow, and 'aput', which means snow as a substance. Parts of speech work very differently in the Inuit language than in English, so these definitions are somewhat misleading.
The Inuit languages can form very long words by adding more and more descriptive affixes to words. Those affixes may modify the syntactic and semantic properties of the base word, or may add qualifiers to it in much the same way that English uses adjectives or prepositional phrases to qualify nouns (e.g. "falling snow", "blowing snow", "snow on the ground", "snow drift", etc.)
The "fact" that there are many Inuit words for snow has been put forward so often that it has become a journalistic cliché.
The Inuit use a base-20 counting system.
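As a minimal illustrative sketch (not part of the original article), the following Python function writes a non-negative integer in base 20, the radix of the traditional Inuit counting system; the function name and example value are arbitrary.

    def to_base20(n):
        # Return the base-20 digits of a non-negative integer, most significant first.
        if n == 0:
            return [0]
        digits = []
        while n > 0:
            digits.append(n % 20)
            n //= 20
        return digits[::-1]

    print(to_base20(2024))  # [5, 1, 4], i.e. 5*400 + 1*20 + 4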
Because the Inuit languages are spread over such a large area, divided between different nations and political units and originally reached by Europeans of different origins at different times, there is no uniform way of writing the Inuit language.
Currently there are six "standard" ways to write the languages:
Though all except the syllabics use a Latin-based script, the alphabets differ in use of diacritics, non-Latin letters, etc. Most Inuktitut in Nunavut and Nunavik is written using a script called Inuktitut syllabics, based on Canadian Aboriginal syllabics. The western part of Nunavut and the Northwest Territories use a Latin-script alphabet usually identified as Inuinnaqtun. In Alaska, another Latin alphabet is used, with some characters using diacritics. Nunatsiavut uses an alphabet devised by German-speaking Moravian missionaries, which includes the letter kra. Greenland's Latin alphabet was originally much like the one used in Nunatsiavut, but underwent a spelling reform in 1973 to bring the orthography in line with changes in pronunciation and better reflect the phonemic inventory of the language.
Inuktitut syllabics, used in Canada, is based on Cree syllabics, devised by the missionary James Evans based on Devanagari, a Brahmi script. The present form of Canadian Inuktitut syllabics was adopted by the Inuit Cultural Institute in Canada in the 1970s.
Though presented in syllabic form, syllabics is not a true syllabary but an abugida, since syllables starting with the same consonant are written with graphically similar letters.
All of the characters needed for Inuktitut syllabics are available in the Unicode character repertoire, in the blocks Unified Canadian Aboriginal Syllabics.
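As an illustrative sketch (not part of the original article), membership in that block can be checked directly from a character's code point, since the Unified Canadian Aboriginal Syllabics block spans U+1400 through U+167F; the helper function below is hypothetical.

    def in_ucas(ch):
        # True if the character lies in the Unified Canadian Aboriginal Syllabics block (U+1400-U+167F).
        return 0x1400 <= ord(ch) <= 0x167F

    print(in_ucas(chr(0x1403)))  # a syllabic character from the block -> True
    print(in_ucas("k"))          # a Latin letter -> False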
The Canadian national organization Inuit Tapiriit Kanatami adopted Inuktut Qaliujaaqpait, a unified orthography for all varieties of Inuktitut, in September 2019. It is based on the Latin alphabet without diacritics. | [
{
"paragraph_id": 0,
"text": "The Inuit languages are a closely related group of indigenous American languages traditionally spoken across the North American Arctic and the adjacent subarctic regions as far south as Labrador. The Inuit languages are one of the two branches of the Eskimoan language family, the other being the Yupik languages, which are spoken in Alaska and the Russian Far East. Most Inuit people live in one of three countries: Greenland, a self-governing territory within the Kingdom of Denmark; Canada, specifically in Nunavut, the Inuvialuit Settlement Region of the Northwest Territories, the Nunavik region of Quebec, and the Nunatsiavut and NunatuKavut regions of Labrador; and the United States, specifically in northern and western Alaska.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The total population of Inuit speaking their traditional languages is difficult to assess with precision, since most counts rely on self-reported census data that may not accurately reflect usage or competence. Greenland census estimates place the number of Inuit language speakers there at roughly 50,000. According to the 2021 Canadian census, the Inuit population of Canada is 70,540, of which 33,790 report Inuit as their first language. Greenland and Canada account for the bulk of Inuit speakers, although about 7,500 Alaskans speak some variety of an Inuit language out of a total population of over 13,000 Inuit. An estimated 7,000 Greenlandic Inuit live in Denmark, the largest group outside of North America. Thus, the total population of Inuit speakers is about 100,000 people.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The traditional language of the Inuit is a system of closely interrelated dialects that are not readily comprehensible from one end of the Inuit world to the other; some people do not think of it as a single language but rather a group of languages. However, there are no clear criteria for breaking the Inuit language into specific member languages since it forms a dialect continuum. Each band of Inuit understands its neighbours, and most likely its neighbours' neighbours; but at some remove, comprehensibility drops to a very low level.",
"title": "Nomenclature"
},
{
"paragraph_id": 3,
"text": "As a result, Inuit in different places use different words for its own variants and for the entire group of languages, and this ambiguity has been carried into other languages, creating a great deal of confusion over what labels should be applied to it.",
"title": "Nomenclature"
},
{
"paragraph_id": 4,
"text": "In Greenland the official form of Inuit language, and the official language of the state, is called Kalaallisut. In other languages, it is often called Greenlandic or some cognate term. The Inuit languages of Alaska are called Inupiatun, but the variants of the Seward Peninsula are distinguished from the other Alaskan variants by calling them Qawiaraq, or for some dialects, Bering Strait Inupiatun.",
"title": "Nomenclature"
},
{
"paragraph_id": 5,
"text": "In Canada, the word Inuktitut is routinely used to refer to all Canadian variants of the Inuit traditional language, and it is under that name that it is recognised as one of the official languages of Nunavut and the Northwest Territories. However, one of the variants of western Nunavut, and the eastern Northwest Territories, is called Inuinnaqtun to distinguish itself from the dialects of eastern Canada, while the variants of the Northwest Territories are sometimes called Inuvialuktun and have in the past sometimes been called Inuktun. In those dialects, the name is sometimes rendered as Inuktitun to reflect dialectal differences in pronunciation. The Inuit language of Quebec is called Inuttitut by its speakers, and often by other people, but this is a minor variation in pronunciation. In Labrador, the language is called Inuttut or, often in official documents, by the more descriptive name Labradorimiutut. Furthermore, Canadians – both Inuit and non-Inuit – sometimes use the word Inuktitut to refer to all Inuit language variants, including those of Alaska and Greenland.",
"title": "Nomenclature"
},
{
"paragraph_id": 6,
"text": "The phrase \"Inuit language\" is largely limited to professional discourse, since in each area, there is one or more conventional terms that cover all the local variants; or it is used as a descriptive term in publications where readers can't necessarily be expected to know the locally used words. In Nunavut the government groups all dialects of Inuktitut and Inuinnaqtun under the term Inuktut.",
"title": "Nomenclature"
},
{
"paragraph_id": 7,
"text": "Although many people refer to the Inuit language as Eskimo language, this is a broad term that also includes the Yupik languages, and is in addition strongly discouraged in Canada and diminishing in usage elsewhere. See the article on Eskimo for more information on this word.",
"title": "Nomenclature"
},
{
"paragraph_id": 8,
"text": "The Inuit languages constitute a branch of the Eskimo–Aleut language family. They are closely related to the Yupik languages and more remotely to Aleut. These other languages are all spoken in western Alaska, United States, and eastern Chukotka, Russia. They are not discernibly related to other indigenous languages of the Americas or northeast Asia, although there have been some unsubstantiated proposals that they are distantly related to the Uralic languages of western Siberia and northern Europe, in a proposed Uralo-Siberian grouping, or even to the Indo-European languages as part of a Nostratic superphylum. Some had previously lumped them in with the Paleosiberian languages, though that is a geographic rather than a linguistic grouping.",
"title": "Classification and history"
},
{
"paragraph_id": 9,
"text": "Early forms of the Inuit language are believed to have been spoken by the Thule people, who migrated east from Beringia towards the Arctic Archipelago, which had been occupied by people of the Dorset culture since the beginning of the 2nd millennium. By 1300, the Inuit and their language had reached western Greenland, and finally east Greenland roughly at the same time the Viking colonies in southern Greenland disappeared. It is generally believed that it was during this centuries-long eastward migration that the Inuit language became distinct from the Yupik languages spoken in Western Alaska and Chukotka.",
"title": "Classification and history"
},
{
"paragraph_id": 10,
"text": "Until 1902, a possible enclave of the Dorset, the Sadlermiut (in modern Inuktitut spelling Sallirmiut), existed on Southampton Island. Almost nothing is known about their language, but the few eyewitness accounts tell of them speaking a \"strange dialect\". This suggests that they also spoke an Inuit language, but one quite distinct from the forms spoken in Canada today.",
"title": "Classification and history"
},
{
"paragraph_id": 11,
"text": "The Yupik and Inuit languages are very similar syntactically and morphologically. Their common origin can be seen in a number of cognates:",
"title": "Classification and history"
},
{
"paragraph_id": 12,
"text": "The western Alaskan variants retain a large number of features present in proto-Inuit language and in Yup'ik, enough so that they might be classed as Yup'ik languages if they were viewed in isolation from the larger Inuit world.",
"title": "Classification and history"
},
{
"paragraph_id": 13,
"text": "The Inuit languages are a fairly closely linked set of languages which can be broken up using a number of different criteria. Traditionally, Inuit describe dialect differences by means of place names to describe local idiosyncrasies in language: The dialect of Igloolik versus the dialect of Iqaluit, for example. However, political and sociological divisions are increasingly the principal criteria for describing different variants of the Inuit languages because of their links to different writing systems, literary traditions, schools, media sources and borrowed vocabulary. This makes any partition of the Inuit language somewhat problematic. This article will use labels that try to synthesise linguistic, sociolinguistic and political considerations in splitting up the Inuit dialect spectrum. This scheme is not the only one used or necessarily one used by Inuit themselves, but its labels do try to reflect the usages most seen in popular and technical literature.",
"title": "Geographic distribution and variants"
},
{
"paragraph_id": 14,
"text": "In addition to the territories listed below, some 7,000 Greenlandic speakers are reported to live in mainland Denmark, and according to the 2001 census roughly 200 self-reported Inuktitut native speakers regularly live in parts of Canada which are outside traditional Inuit lands.",
"title": "Geographic distribution and variants"
},
{
"paragraph_id": 15,
"text": "Of the roughly 13,000 Alaskan Iñupiat, as few as 3000 may still be able to speak the Iñupiaq, with most of them over the age of 40. Alaskan Inupiat speak three distinct dialects, which have difficult mutual intelligibility:",
"title": "Geographic distribution and variants"
},
{
"paragraph_id": 16,
"text": "The Inuit languages are official in the Northwest Territories and Nunavut (the dominant language in the latter); have a high level of official support in Nunavik, a semi-autonomous portion of Quebec; and are still spoken in some parts of Labrador. Generally, Canadians refer to all dialects spoken in Canada as Inuktitut, but the terms Inuvialuktun, Inuinnaqtun, and Inuttut (also called Nunatsiavummiutut, Labradorimiutut or Inuttitut) have some currency in referring to the variants of specific areas.",
"title": "Geographic distribution and variants"
},
{
"paragraph_id": 17,
"text": "Greenland counts approximately 50,000 speakers of the Inuit languages, over 90% of whom speak west Greenlandic dialects at home.",
"title": "Geographic distribution and variants"
},
{
"paragraph_id": 18,
"text": "Greenlandic was strongly supported by the Danish Christian mission (conducted by the Danish state church) in Greenland. Several major dictionaries were created, beginning with Poul Egedes's Dictionarium Grönlandico-danico-latinum (1750) and culminating with Samuel Kleinschmidt's (1871) \"Den grønlandske ordbog\" (Transl. \"The Greenlandic Dictionary\"), which contained a Greenlandic grammatical system that has formed the basis of modern Greenlandic grammar. Together with the fact that until 1925 Danish was not taught in the public schools, these policies had the consequence that Greenlandic has always and continues to enjoy a very strong position in Greenland, both as a spoken as well as written language.",
"title": "Geographic distribution and variants"
},
{
"paragraph_id": 19,
"text": "Eastern Canadian Inuit language variants have fifteen consonants and three vowels (which can be long or short).",
"title": "Phonology and phonetics"
},
{
"paragraph_id": 20,
"text": "Consonants are arranged with five places of articulation: bilabial, alveolar, palatal, velar and uvular; and three manners of articulation: voiceless stops, voiced continuants, and nasals, as well as two additional sounds—voiceless fricatives. The Alaskan dialects have an additional manner of articulation, the retroflex, which was present in proto-Inuit language. Retroflexes have disappeared in all the Canadian and Greenlandic dialects. In Natsilingmiutut, the voiced palatal stop /ɟ/ derives from a former retroflex.",
"title": "Phonology and phonetics"
},
{
"paragraph_id": 21,
"text": "Almost all Inuit language variants have only three basic vowels and make a phonological distinction between short and long forms of all vowels. The only exceptions are at the extreme edges of the Inuit world: parts of Greenland, and in western Alaska.",
"title": "Phonology and phonetics"
},
{
"paragraph_id": 22,
"text": "The Inuit languages, like other Eskimo–Aleut languages, have very rich morphological systems in which a succession of different morphemes are added to root words (like verb endings in European languages) to indicate things that, in languages like English, would require several words to express. (See also: Agglutinative language and Polysynthetic language) All Inuit words begin with a root morpheme to which other morphemes are suffixed. The language has hundreds of distinct suffixes, in some dialects as many as 700. Fortunately for learners, the language has a highly regular morphology. Although the rules are sometimes very complicated, they do not have exceptions in the sense that English and other Indo-European languages do.",
"title": "Morphology and syntax"
},
{
"paragraph_id": 23,
"text": "This system makes words very long, and potentially unique. For example, in central Nunavut Inuktitut:",
"title": "Morphology and syntax"
},
{
"paragraph_id": 24,
"text": "This long word is composed of a root word tusaa- \"to hear\" followed by five suffixes:",
"title": "Morphology and syntax"
},
{
"paragraph_id": 25,
"text": "This sort of word construction is pervasive in the Inuit languages and makes them very unlike English. In one large Canadian corpus – the Nunavut Hansard – 92% of all words appear only once, in contrast to a small percentage in most English corpora of similar size. This makes the application of Zipf's law quite difficult in the Inuit language. Furthermore, the notion of a part of speech can be somewhat complicated in the Inuit languages. Fully inflected verbs can be interpreted as nouns. The word ilisaijuq can be interpreted as a fully inflected verb: \"he studies\", but can also be interpreted as a noun: \"student\". That said, the meaning is probably obvious to a fluent speaker, when put in context.",
"title": "Morphology and syntax"
},
{
"paragraph_id": 26,
"text": "The morphology and syntax of the Inuit languages vary to some degree between dialects, and the article Inuit grammar describes primarily central Nunavut dialects, but the basic principles will generally apply to all of them and to some degree to Yupik languages as well.",
"title": "Morphology and syntax"
},
{
"paragraph_id": 27,
"text": "Both the names of places and people tend to be highly prosaic when translated. Iqaluit, for example, is simply the plural of the noun iqaluk \"fish\" (\"Arctic char\", \"salmon\" or \"trout\" depending on dialect). Igloolik (Iglulik) means place with houses, a word that could be interpreted as simply town; Inuvik is place of people; Baffin Island, Qikiqtaaluk in Inuktitut, translates approximately to \"big island\".",
"title": "Vocabulary"
},
{
"paragraph_id": 28,
"text": "Although practically all Inuit have legal names based on southern naming traditions, at home and among themselves they still use native naming traditions. There too, names tend to consist of highly prosaic words. The Inuit traditionally believed that by adopting the name of a dead person or a class of things, they could take some of their characteristics or powers, and enjoy a part of their identity. (This is why they were always very willing to accept European names: they believed that this made them equal to the Europeans.)",
"title": "Vocabulary"
},
{
"paragraph_id": 29,
"text": "Common native names in Canada include \"Ujarak\" (rock), \"Nuvuk\" (headland), \"Nasak\" (hat, or hood), \"Tupiq\" or \"Tupeq\" in Kalaallisut (tent), and \"Qajaq\" (kayak). Inuit also use animal names, traditionally believing that by using those names, they took on some of the characteristics of that animal: \"Nanuq\" or \"Nanoq\" in Kalaallisut (polar-bear), \"Uqalik\" or \"Ukaleq\" in Kalaallisut (Arctic hare), and \"Tiriaq\" or \"Teriaq\" in Kalaallisut (mouse) are favourites. In other cases, Inuit are named after dead people or people in traditional tales, by naming them after anatomical traits those people are believed to have had. Examples include \"Itigaituk\" (has no feet), \"Anana\" or \"Anaana\" (mother), \"Piujuq\" (beautiful) and \"Tulimak\" (rib). Inuit may have any number of names, given by parents and other community members.",
"title": "Vocabulary"
},
{
"paragraph_id": 30,
"text": "In the 1920s, changes in lifestyle and serious epidemics like tuberculosis made the government of Canada interested in tracking the Inuit of Canada's Arctic. Traditionally Inuit names reflect what is important in Inuit culture: environment, landscape, seascape, family, animals, birds, spirits. However these traditional names were difficult for non-Inuit to parse. Also, the agglutinative nature of Inuit language meant that names seemed long and were difficult for southern bureaucrats and missionaries to pronounce.",
"title": "Vocabulary"
},
{
"paragraph_id": 31,
"text": "Thus, in the 1940s, the Inuit were given disc numbers, recorded on a special leather ID tag, like a dog tag. They were required to keep the tag with them always. (Some tags are now so old and worn that the number is polished out.) The numbers were assigned with a letter prefix that indicated location (E = east), community, and then the order in which the census-taker saw the individual. In some ways this state renaming was abetted by the churches and missionaries, who viewed the traditional names and their calls to power as related to shamanism and paganism.",
"title": "Vocabulary"
},
{
"paragraph_id": 32,
"text": "They encouraged people to take Christian names. So a young woman who was known to her relatives as \"Lutaaq, Pilitaq, Palluq, or Inusiq\" and had been baptised as \"Annie\" was under this system to become Annie E7-121. People adopted the number-names, their family members' numbers, etc., and learned all the region codes (like knowing a telephone area code).",
"title": "Vocabulary"
},
{
"paragraph_id": 33,
"text": "Until Inuit began studying in the south, many did not know that numbers were not normal parts of Christian and English naming systems. Then in 1969, the government started Project Surname, headed by Abe Okpik, to replace number-names with patrilineal \"family surnames\".",
"title": "Vocabulary"
},
{
"paragraph_id": 34,
"text": "A popular belief exists that the Inuit have an unusually large number of words for snow. This is not accurate, and results from a misunderstanding of the nature of polysynthetic languages. In fact, the Inuit have only a few base roots for snow: 'qanniq-' ('qanik-' in some dialects), which is used most often like the verb to snow, and 'aput', which means snow as a substance. Parts of speech work very differently in the Inuit language than in English, so these definitions are somewhat misleading.",
"title": "Vocabulary"
},
{
"paragraph_id": 35,
"text": "The Inuit languages can form very long words by adding more and more descriptive affixes to words. Those affixes may modify the syntactic and semantic properties of the base word, or may add qualifiers to it in much the same way that English uses adjectives or prepositional phrases to qualify nouns (e.g. \"falling snow\", \"blowing snow\", \"snow on the ground\", \"snow drift\", etc.)",
"title": "Vocabulary"
},
{
"paragraph_id": 36,
"text": "The \"fact\" that there are many Inuit words for snow has been put forward so often that it has become a journalistic cliché.",
"title": "Vocabulary"
},
{
"paragraph_id": 37,
"text": "The Inuit use a base-20 counting system.",
"title": "Vocabulary"
},
{
"paragraph_id": 38,
"text": "Because the Inuit languages are spread over such a large area, divided between different nations and political units and originally reached by Europeans of different origins at different times, there is no uniform way of writing the Inuit language.",
"title": "Writing"
},
{
"paragraph_id": 39,
"text": "Currently there are six \"standard\" ways to write the languages:",
"title": "Writing"
},
{
"paragraph_id": 40,
"text": "Though all except the syllabics use a Latin-based script, the alphabets differ in use of diacritics, non-Latin letters, etc. Most Inuktitut in Nunavut and Nunavik is written using a script called Inuktitut syllabics, based on Canadian Aboriginal syllabics. The western part of Nunavut and the Northwest Territories use a Latin-script alphabet usually identified as Inuinnaqtun. In Alaska, another Latin alphabet is used, with some characters using diacritics. Nunatsiavut uses an alphabet devised with German-speaking Moravian missionaries, which includes the letter kra. Greenland's Latin alphabet was originally much like the one used in Nunatsiavut, but underwent a spelling reform in 1973 to bring the orthography in line with changes in pronunciation and better reflect the phonemic inventory of the language.",
"title": "Writing"
},
{
"paragraph_id": 41,
"text": "Inuktitut syllabics, used in Canada, is based on Cree syllabics, devised by the missionary James Evans based on Devanagari, a Brahmi script. The present form of Canadian Inuktitut syllabics was adopted by the Inuit Cultural Institute in Canada in the 1970s.",
"title": "Writing"
},
{
"paragraph_id": 42,
"text": "Though presented in syllabic form, syllabics is not a true syllabary but an abugida, since syllables starting with the same consonant are written with graphically similar letters.",
"title": "Writing"
},
{
"paragraph_id": 43,
"text": "All of the characters needed for Inuktitut syllabics are available in the Unicode character repertoire, in the blocks Unified Canadian Aboriginal Syllabics.",
"title": "Writing"
},
{
"paragraph_id": 44,
"text": "The Canadian national organization Inuit Tapiriit Kanatami adopted Inuktut Qaliujaaqpait, a unified orthography for all varieties of Inuktitut, in September 2019. It is based on the Latin alphabet without diacritics.",
"title": "Writing"
}
]
| The Inuit languages are a closely related group of indigenous American languages traditionally spoken across the North American Arctic and the adjacent subarctic regions as far south as Labrador. The Inuit languages are one of the two branches of the Eskimoan language family, the other being the Yupik languages, which are spoken in Alaska and the Russian Far East. Most Inuit people live in one of three countries: Greenland, a self-governing territory within the Kingdom of Denmark; Canada, specifically in Nunavut, the Inuvialuit Settlement Region of the Northwest Territories, the Nunavik region of Quebec, and the Nunatsiavut and NunatuKavut regions of Labrador; and the United States, specifically in northern and western Alaska. The total population of Inuit speaking their traditional languages is difficult to assess with precision, since most counts rely on self-reported census data that may not accurately reflect usage or competence. Greenland census estimates place the number of Inuit language speakers there at roughly 50,000. According to the 2021 Canadian census, the Inuit population of Canada is 70,540, of which 33,790 report Inuit as their first language. Greenland and Canada account for the bulk of Inuit speakers, although about 7,500 Alaskans speak some variety of an Inuit language out of a total population of over 13,000 Inuit. An estimated 7,000 Greenlandic Inuit live in Denmark, the largest group outside of North America. Thus, the total population of Inuit speakers is about 100,000 people. | 2001-11-07T21:58:36Z | 2023-11-03T03:22:03Z | [
"Template:Short description",
"Template:Further",
"Template:Eskaleut languages",
"Template:Languages of Alaska",
"Template:Infobox language family",
"Template:Languages of Greenland",
"Template:IPA",
"Template:Webarchive",
"Template:Greenlandic language",
"Template:Languages of Quebec",
"Template:Inuit",
"Template:Cleanup lang",
"Template:Wikt-lang",
"Template:Transliteration",
"Template:Citation needed",
"Template:Cite news",
"Template:Small",
"Template:Languages of Yukon",
"Template:More citations needed",
"Template:Indigenous Peoples of Canada",
"Template:Lang",
"Template:Main",
"Template:For",
"Template:Cite book",
"Template:Languages of the United States",
"Template:Languages of Nunavut",
"Template:Authority control",
"Template:Reflist",
"Template:ISBN",
"Template:Cite web",
"Template:Commons category",
"Template:Languages of Canada"
]
| https://en.wikipedia.org/wiki/Inuit_languages |
15,229 | Ibn Battuta | Abu Abdullah Muhammad ibn Battutah (/ˌɪbən bætˈtuːtɑː/; 24 February 1304 – 1368/1369), commonly known as Ibn Battuta, was a Maghrebi traveller, explorer and scholar. Over a period of thirty years from 1325 to 1354, Ibn Battuta visited most of North Africa, the Middle East, East Africa, Central Asia, South Asia, Southeast Asia, China, the Iberian Peninsula, and West Africa. Near the end of his life, he dictated an account of his journeys, titled A Gift to Those Who Contemplate the Wonders of Cities and the Marvels of Travelling, but commonly known as The Rihla.
Ibn Battuta travelled more than any other explorer in pre-modern history, totalling around 117,000 km (73,000 mi), surpassing Zheng He with about 50,000 km (31,000 mi) and Marco Polo with 24,000 km (15,000 mi). There have been doubts over the historicity of some of Ibn Battuta's travels, particularly as they reach farther East.
Ibn Battuta is a patronymic literally meaning "son of the duckling". His most common full name is given as Abu Abdullah Muhammad ibn Battuta. In his travelogue, the Rihla, he gives his full name as Shams al-Din Abu’Abdallah Muhammad ibn’Abdallah ibn Muhammad ibn Ibrahim ibn Muhammad ibn Yusuf Lawati al-Tanji ibn Battuta.
All that is known about Ibn Battuta's life comes from the autobiographical information included in the account of his travels, which records that he was of Berber descent, born into a family of Islamic legal scholars in Tangier, known as qadis in the Muslim tradition in Morocco, on 24 February 1304, during the reign of the Marinid dynasty. His family belonged to a Berber tribe known as the Lawata. As a young man, he would have studied at a Sunni Maliki madhhab (Islamic jurisprudence school), the dominant form of education in North Africa at that time. Maliki Muslims requested Ibn Battuta serve as their religious judge, as he was from an area where it was practised.
On 2 Rajab in the Muslim year 725 Anno Hegirae (14 June 1325 Anno Domini on the Christian calendar), at the age of twenty-one, Ibn Battuta set off from his home town on a hajj, or pilgrimage, to Mecca, a journey that would ordinarily take sixteen months. He was eager to learn more about far-away lands and craved adventure. No one knew that he would not return to Morocco again for 24 years.
I set out alone, having neither fellow-traveller in whose companionship I might find cheer, nor caravan whose part I might join, but swayed by an overmastering impulse within me and a desire long-cherished in my bosom to visit these illustrious sanctuaries. So I braced my resolution to quit my dear ones, female and male, and forsook my home as birds forsake their nests. My parents being yet in the bonds of life, it weighed sorely upon me to part from them, and both they and I were afflicted with sorrow at this separation.
He travelled to Mecca overland, following the North African coast across the sultanates of Abd al-Wadid and Hafsid. The route took him through Tlemcen, Béjaïa, and then Tunis, where he stayed for two months. For safety, Ibn Battuta usually joined a caravan to reduce the risk of being robbed. He took a bride in the town of Sfax, but soon left her due to a dispute with her father. That was the first in a series of marriages that would feature in his travels.
In the early spring of 1326, after a journey of over 3,500 km (2,200 mi), Ibn Battuta arrived at the port of Alexandria, at the time part of the Bahri Mamluk empire. He met two ascetic pious men in Alexandria. One was Sheikh Burhanuddin, who is supposed to have foretold the destiny of Ibn Battuta as a world traveller and told him, "It seems to me that you are fond of foreign travel. You must visit my brother Fariduddin in India, Rukonuddin in Sind, and Burhanuddin in China. Convey my greetings to them." Another pious man Sheikh Murshidi interpreted the meaning of a dream of Ibn Battuta that he was meant to be a world traveller.
He spent several weeks visiting sites in the area, and then headed inland to Cairo, the capital of the Mamluk Sultanate and an important city. After spending about a month in Cairo, he embarked on the first of many detours within the relative safety of Mamluk territory. Of the three usual routes to Mecca, Ibn Battuta chose the least-travelled, which involved a journey up the Nile valley, then east to the Red Sea port of Aydhab. Upon approaching the town, however, a local rebellion forced him to turn back.
Ibn Battuta returned to Cairo and took a second side trip, this time to Mamluk-controlled Damascus. During his first trip he had encountered a holy man who prophesied that he would only reach Mecca by travelling through Syria. The diversion held an added advantage; because of the holy places that lay along the way, including Hebron, Jerusalem, and Bethlehem, the Mamluk authorities spared no efforts in keeping the route safe for pilgrims. Without this help many travellers would be robbed and murdered.
After spending the Muslim month of Ramadan, during August, in Damascus, he joined a caravan travelling the 1,300 km (810 mi) south to Medina, site of the Mosque of the Islamic prophet Muhammad. After four days in the town, he journeyed on to Mecca while visiting holy sites along the way; upon his arrival to Mecca he completed his first pilgrimage, in November, and he took the honorific status of El-Hajji. Rather than returning home, Ibn Battuta decided to continue travelling, choosing as his next destination the Ilkhanate, a Mongol Khanate, to the northeast.
On 17 November 1326, following a month spent in Mecca, Ibn Battuta joined a large caravan of pilgrims returning to Iraq across the Arabian Peninsula. The group headed north to Medina and then, travelling at night, turned northeast across the Najd plateau to Najaf, on a journey that lasted about two weeks. In Najaf, he visited the mausoleum of Ali, the Fourth Caliph.
Then, instead of continuing to Baghdad with the caravan, Ibn Battuta started a six-month detour that took him into Iran. From Najaf, he journeyed to Wasit, then followed the river Tigris south to Basra. His next destination was the town of Isfahan across the Zagros Mountains in Iran. He then headed south to Shiraz, a large, flourishing city spared the destruction wrought by Mongol invaders on many more northerly towns. Finally, he returned across the mountains to Baghdad, arriving there in June 1327. Parts of the city were still ruined from the damage inflicted by Hulagu Khan's invading army in 1258.
In Baghdad, he found Abu Sa'id, the last Mongol ruler of the unified Ilkhanate, leaving the city and heading north with a large retinue. Ibn Battuta joined the royal caravan for a while, then turned north on the Silk Road to Tabriz, the first major city in the region to open its gates to the Mongols and by then an important trading centre as most of its nearby rivals had been razed by the Mongol invaders.
Ibn Battuta left again for Baghdad, probably in July, but first took an excursion northwards along the river Tigris. He visited Mosul, where he was the guest of the Ilkhanate governor, and then the towns of Cizre (Jazirat ibn 'Umar) and Mardin in modern-day Turkey. At a hermitage on a mountain near Sinjar, he met a Kurdish mystic who gave him some silver coins. Once back in Mosul, he joined a "feeder" caravan of pilgrims heading south to Baghdad, where they would meet up with the main caravan that crossed the Arabian Desert to Mecca. Ill with diarrhoea, he arrived in the city weak and exhausted for his second hajj.
Ibn Battuta remained in Mecca for some time (the Rihla suggests about three years, from September 1327 until autumn 1330). Problems with chronology, however, lead commentators to suggest that he may have left after the 1328 hajj.
After the hajj in either 1328 or 1330, he made his way to the port of Jeddah on the Red Sea coast. From there he followed the coast in a series of boats known as jalbahs (small craft made of wooden planks sewn together), making slow progress against the prevailing south-easterly winds. Once in Yemen he visited Zabīd and later the highland town of Ta'izz, where he met the Rasulid dynasty king (Malik) Mujahid Nur al-Din Ali. Ibn Battuta also mentions visiting Sana'a, but whether he actually did so is doubtful. In all likelihood, he went directly from Ta'izz to the important trading port of Aden, arriving around the beginning of 1329 or 1331.
From Aden, Ibn Battuta embarked on a ship heading for Zeila on the coast of Somalia. He then moved on to Cape Guardafui further down the Somali seaboard, spending about a week in each location. Later he would visit Mogadishu, the then pre-eminent city of the "Land of the Berbers" (بلد البربر Balad al-Barbar, the medieval Arabic term for the Horn of Africa).
When Ibn Battuta arrived in 1332, Mogadishu stood at the zenith of its prosperity. He described it as "an exceedingly large city" with many rich merchants, noted for its high-quality fabric that was exported to other countries, including Egypt. Battuta added that the city was ruled by a Somali Sultan, Abu Bakr ibn Shaikh 'Umar. He noted that Sultan Abu Bakr had dark skin complexion and spoke in his native tongue (Somali), but was also fluent in Arabic. The Sultan also had a retinue of wazirs (ministers), legal experts, commanders, royal eunuchs, and other officials at his beck and call.
Ibn Battuta continued by ship south to the Swahili coast, a region then known in Arabic as the Bilad al-Zanj ("Land of the Zanj") with an overnight stop at the island town of Mombasa. Although relatively small at the time, Mombasa would become important in the following century. After a journey along the coast, Ibn Battuta next arrived in the island town of Kilwa in present-day Tanzania, which had become an important transit centre of the gold trade. He described the city as "one of the finest and most beautifully built towns; all the buildings are of wood, and the houses are roofed with dīs reeds".
Ibn Battuta recorded his visit to the Kilwa Sultanate in 1330, and commented favourably on the humility and religion of its ruler, Sultan al-Hasan ibn Sulaiman, a descendant of the legendary Ali ibn al-Hassan Shirazi. He further wrote that the authority of the Sultan extended from Malindi in the north to Inhambane in the south and was particularly impressed by the planning of the city, believing it to be the reason for Kilwa's success along the coast. During this period, he described the construction of the Palace of Husuni Kubwa and a significant extension to the Great Mosque of Kilwa, which was made of coral stones and was the largest mosque of its kind. With a change in the monsoon winds, Ibn Battuta sailed back to Arabia, first to Oman and the Strait of Hormuz then on to Mecca for the hajj of 1330 (or 1332).
After his third pilgrimage to Mecca, Ibn Battuta decided to seek employment with the Sultan of Delhi, Muhammad bin Tughluq. In the autumn of 1330 (or 1332), he set off for the Seljuk controlled territory of Anatolia to take an overland route to India. He crossed the Red Sea and the Eastern Desert to reach the Nile valley and then headed north to Cairo. From there he crossed the Sinai Peninsula to Palestine and then travelled north again through some of the towns that he had visited in 1326. From the Syrian port of Latakia, a Genoese ship took him (and his companions) to Alanya on the southern coast of modern-day Turkey.
He then journeyed westwards along the coast to the port of Antalya. In the town he met members of one of the semi-religious fityan associations. These were a feature of most Anatolian towns in the 13th and 14th centuries. The members were young artisans and had at their head a leader with the title of Akhil. The associations specialised in welcoming travellers. Ibn Battuta was very impressed with the hospitality that he received and would later stay in their hospices in more than 25 towns in Anatolia. From Antalya Ibn Battuta headed inland to Eğirdir which was the capital of the Hamidids. He spent Ramadan (June 1331 or May 1333) in the city.
From this point his itinerary across Anatolia in the Rihla becomes confused. Ibn Battuta describes travelling westwards from Eğirdir to Milas and then skipping 420 km (260 mi) eastward past Eğirdir to Konya. He then continues travelling in an easterly direction, reaching Erzurum from where he skips 1,160 km (720 mi) back to Birgi which lies north of Milas. Historians believe that Ibn Battuta visited a number of towns in central Anatolia, but not in the order in which he describes.
When Ibn Battuta arrived in Iznik, it had just been conquered by Orhan, Sultan of the nascent Ottoman Empire. Orhan was away and his wife was in command of the nearby stationed soldiers, Ibn Battuta gave this account of Orhan's wife: "A pious and excellent woman. She treated me honourably, gave me hospitality and sent gifts."
Ibn Battuta's account of Orhan:
The greatest of the kings of the Turkmens and the richest in wealth, lands and military forces. Of fortresses, he possesses nearly a hundred, and for most of his time, he is continually engaged in making a round of them, staying in each fortress for some days to put it in good order and examine its condition. It is said that he has never stayed for a whole month in any one town. He also fights with the infidels continually and keeps them under siege.
Ibn Battuta had also visited Bursa which at the time was the capital of the Ottoman Beylik, he described Bursa as "a great and important city with fine bazaars and wide streets, surrounded on all sides with gardens and running springs".
He also visited the Beylik of Aydin. Ibn Battuta stated that the ruler of the Beylik of Aydin had twenty Greek slaves at the entrance of his palace, and Ibn Battuta was given a Greek slave as a gift; this was the first time in his travels that he acquired a servant. Later, he purchased a young Greek girl for 40 dinars in Ephesus, was gifted another slave in Izmir by the Sultan, and purchased a second girl in Balikesir. The conspicuous evidence of his wealth and prestige continued to grow.
From Sinope he took a sea route to the Crimean Peninsula, arriving in the Golden Horde realm. He went to the port town of Azov, where he met with the emir of the Khan, then to the large and rich city of Majar. He left Majar to meet with Uzbeg Khan's travelling court (Orda), which was at the time near Mount Beshtau. From there he made a journey to Bolghar, which became the northernmost point he reached, and noted its unusually short nights in summer (by the standards of the subtropics). Then he returned to the Khan's court and with it moved to Astrakhan.
Ibn Battuta recorded that while in Bolghar he wanted to travel further north into the land of darkness. The land (northern Siberia) is snow-covered throughout, and the only means of transport is the dog-drawn sled. There lived a mysterious people who were reluctant to show themselves. They traded with southern people in a peculiar way. Southern merchants brought various goods and placed them in an open area on the snow at night, then returned to their tents. The next morning they came to the place again and found their merchandise taken by the mysterious people, but in exchange they found furs which could be used for making valuable coats, jackets, and other winter garments. The trade was conducted without the merchants and the mysterious people ever seeing each other. As Ibn Battuta was not a merchant and saw no benefit in going there, he abandoned the journey to this land of darkness.
When they reached Astrakhan, Öz Beg Khan had just given permission for one of his pregnant wives, Princess Bayalun, a daughter of Byzantine emperor Andronikos III Palaiologos, to return to her home city of Constantinople to give birth. Ibn Battuta talked his way into this expedition, which would be his first beyond the boundaries of the Islamic world.
Arriving in Constantinople towards the end of 1332 (or 1334), he met the Byzantine emperor Andronikos III Palaiologos. He visited the great church of Hagia Sophia and spoke with an Eastern Orthodox priest about his travels in the city of Jerusalem. After a month in the city, Ibn Battuta returned to Astrakhan, then arrived in the capital city Sarai al-Jadid and reported the accounts of his travels to Sultan Öz Beg Khan (r. 1313–1341). Then he continued past the Caspian and Aral Seas to Bukhara and Samarkand, where he visited the court of another Mongol khan, Tarmashirin (r. 1331–1334) of the Chagatai Khanate. From there, he journeyed south to Afghanistan, then crossed into India via the mountain passes of the Hindu Kush. In the Rihla, he mentions these mountains and the history of the range in slave trading. He wrote,
After this I proceeded to the city of Barwan, in the road to which is a high mountain, covered with snow and exceedingly cold; they call it the Hindu Kush, that is Hindu-slayer, because most of the slaves brought thither from India die on account of the intenseness of the cold.
Ibn Battuta and his party reached the Indus River on 12 September 1333. From there, he made his way to Delhi and became acquainted with the sultan, Muhammad bin Tughluq.
Muhammad bin Tughluq was renowned as the wealthiest man in the Muslim world at that time. He patronized various scholars, Sufis, qadis, viziers, and other functionaries in order to consolidate his rule. As with Mamluk Egypt, the Tughlaq Dynasty was a rare vestigial example of Muslim rule after a Mongol invasion. On the strength of his years of study in Mecca, Ibn Battuta was appointed a qadi, or judge, by the sultan. However, he found it difficult to enforce Islamic law beyond the sultan's court in Delhi, due to lack of Islamic appeal in India.
It is uncertain by which route Ibn Battuta entered the Indian subcontinent but it is known that he was kidnapped and robbed by rebels on his journey to the Indian coast. He may have entered via the Khyber Pass and Peshawar, or further south. He crossed the Sutlej river near the city of Pakpattan, in modern-day Pakistan, where he paid obeisance at the shrine of Baba Farid, before crossing southwest into Rajput country. From the Rajput kingdom of Sarsatti, Battuta visited Hansi in India, describing it as "among the most beautiful cities, the best constructed and the most populated; it is surrounded with a strong wall, and its founder is said to be one of the great non-Muslim kings, called Tara". Upon his arrival in Sindh, Ibn Battuta mentions the Indian rhinoceros that lived on the banks of the Indus.
The Sultan was erratic even by the standards of the time and for six years Ibn Battuta veered between living the high life of a trusted subordinate and falling under suspicion of treason for a variety of offences. His plan to leave on the pretext of taking another hajj was stymied by the Sultan. The opportunity for Battuta to leave Delhi finally arose in 1341 when an embassy arrived from the Yuan dynasty of China asking for permission to rebuild a Himalayan Buddhist temple popular with Chinese pilgrims.
Ibn Battuta was given charge of the embassy but en route to the coast at the start of the journey to China, he and his large retinue were attacked by a group of bandits. Separated from his companions, he was robbed, kidnapped, and nearly lost his life. Despite this setback, within ten days he had caught up with his group and continued on to Khambhat in the Indian state of Gujarat. From there, they sailed to Calicut (now known as Kozhikode), where Portuguese explorer Vasco da Gama would land two centuries later. While in Calicut, Battuta was the guest of the ruling Zamorin. While Ibn Battuta visited a mosque on shore, a storm arose and one of the ships of his expedition sank. The other ship then sailed without him only to be seized by a local Sumatran king a few months later.
Afraid to return to Delhi and be seen as a failure, he stayed for a time in southern India under the protection of Jamal-ud-Din, ruler of the small but powerful Nawayath sultanate on the banks of the Sharavathi river next to the Arabian Sea. This area is today known as Hosapattana and lies in the Honavar administrative district of Uttara Kannada. Following the overthrow of the sultanate, Ibn Battuta had no choice but to leave India. Although determined to continue his journey to China, he first took a detour to visit the Maldive Islands where he worked as a judge.
He spent nine months on the islands, much longer than he had intended. When he arrived at the capital, Malé, Ibn Battuta did not plan to stay. However, the leaders of the formerly Buddhist nation that had recently converted to Islam were looking for a chief judge, someone who knew Arabic and the Qur'an. To convince him to stay they gave him pearls, gold jewellery, and slaves, while at the same time making it impossible for him to leave by ship. Compelled into staying, he became a chief judge and married into the royal family of Omar I.
Ibn Battuta took on his duties as a judge with keenness and strove to transform local practices to conform to a stricter application of Muslim law. He commanded that men who did not attend Friday prayer be publicly whipped, and that robbers' right hands be cut off. He forbade women from being topless in public, which had previously been the custom. However, these and other strict judgments began to antagonize the island nation's rulers, and involved him in power struggles and political intrigues. Ibn Battuta resigned from his job as chief qadi, although in all likelihood his dismissal was inevitable.
Throughout his travels, Ibn Battuta kept close company with women, usually taking a wife whenever he stopped for any length of time at one place, and then divorcing her when he moved on. While in the Maldives, Ibn Battuta took four wives. In his Travels he wrote that in the Maldives the effect of small dowries and female non-mobility combined to, in effect, make a marriage a convenient temporary arrangement for visiting male travellers and sailors.
From the Maldives, he carried on to Sri Lanka and visited Sri Pada and Tenavaram temple. Ibn Battuta's ship almost sank on embarking from Sri Lanka, only for the vessel that came to his rescue to suffer an attack by pirates. Stranded onshore, he worked his way back to the Madurai kingdom in India. Here he spent some time in the court of the short-lived Madurai Sultanate under Ghiyas-ud-Din Muhammad Damghani, from where he returned to the Maldives and boarded a Chinese junk, still intending to reach China and take up his ambassadorial post.
He reached the port of Chittagong in modern-day Bangladesh intending to travel to Sylhet to meet Shah Jalal, who had become so renowned that Ibn Battuta made a one-month journey from Chittagong through the mountains of Kamaru near Sylhet to reach him. On his way to Sylhet, Ibn Battuta was greeted by several of Shah Jalal's disciples who had come to assist him on his journey many days before he arrived. At the meeting in 1345 CE, Ibn Battuta noted that Shah Jalal was tall and lean, fair in complexion, and lived by the mosque in a cave, where his only item of value was a goat he kept for milk, butter, and yogurt. He observed that the companions of Shah Jalal were foreign and known for their strength and bravery. He also mentions that many people would visit the Shah to seek guidance. Ibn Battuta went further north into Assam, then turned around and continued with his original plan.
In 1345, Ibn Battuta traveled to Samudra Pasai Sultanate (called "al-Jawa") in present-day Aceh, Northern Sumatra, after 40 days voyage from Sunur Kawan. He notes in his travel log that the ruler of Samudra Pasai was a pious Muslim named Sultan Al-Malik Al-Zahir Jamal-ad-Din, who performed his religious duties with utmost zeal and often waged campaigns against animists in the region. The island of Sumatra, according to Ibn Battuta, was rich in camphor, areca nut, cloves, and tin.
The madh'hab he observed was Imam Al-Shafi‘i, whose customs were similar to those he had previously seen in coastal India, especially among the Mappila Muslims, who were also followers of Imam Al-Shafi‘i. At that time Samudra Pasai marked the end of Dar al-Islam, because no territory east of this was ruled by a Muslim. Here he stayed for about two weeks in the wooden walled town as a guest of the sultan, and then the sultan provided him with supplies and sent him on his way on one of his own junks to China.
Ibn Battuta first sailed for 21 days to a place called "Mul Jawa" (island of Java or Majapahit Java), which was the center of a Hindu empire. The empire spanned two months of travel, and ruled over the country of Qaqula and Qamara. He arrived at the walled city named Qaqula/Kakula, and observed that the city had war junks for pirate raiding and collecting tolls and that elephants were employed for various purposes. He met the ruler of Mul Jawa and stayed as a guest for three days.
Ibn Battuta then sailed to a state called Kaylukari in the land of Tawalisi, where he met Urduja, a local princess. Urduja was a brave warrior, and her people were opponents of the Yuan dynasty. She was described as an "idolater", but could write the phrase Bismillah in Islamic calligraphy. The locations of Kaylukari and Tawalisi are disputed. Kaylukari may have referred to Po Klong Garai in Champa (now southern Vietnam), and Urduja might be an aristocrat of Champa or Dai Viet. Filipinos widely believe that Kaylukari was in present-day Pangasinan Province of the Philippines. Their opposition to the Mongols might indicate two possible locations: Japan and Java (Majapahit). In modern times, Urduja has been featured in Filipino textbooks and films as a national heroine. Numerous other locations have been proposed, ranging from Java to somewhere in Guangdong Province, China. However, Sir Henry Yule and William Henry Scott consider both Tawalisi and Urduja to be entirely fictitious. (See Tawalisi for details.) From Kaylukari, Ibn Battuta finally reached Quanzhou in Fujian Province, China.
In the year 1345, Ibn Battuta arrived at Quanzhou in China's Fujian province, then under the rule of the Mongol-led Yuan dynasty. One of the first things he noted was that Muslims referred to the city as "Zaitun" (meaning olive), but Ibn Battuta could not find any olives anywhere. He mentioned local artists and their mastery in making portraits of newly arrived foreigners; these were for security purposes. Ibn Battuta praised the craftsmen and their silk and porcelain; as well as fruits such as plums and watermelons and the advantages of paper money.
He described the manufacturing process of large ships in the city of Quanzhou. He also mentioned Chinese cuisine and its usage of animals such as frogs, pigs, and even dogs, which were sold in the markets, and noted that the chickens in China were larger than those in the west. Scholars, however, have pointed out numerous errors in Ibn Battuta's account of China, for example confusing the Yellow River with the Grand Canal and other waterways, as well as believing that porcelain was made from coal.
In Quanzhou, Ibn Battuta was welcomed by the head of the local Muslim merchants (possibly a fānzhǎng or "Leader of Foreigners" simplified Chinese: 番长; traditional Chinese: 番長; pinyin: fānzhǎng) and Sheikh al-Islam (Imam), who came to meet him with flags, drums, trumpets, and musicians. Ibn Battuta noted that the Muslim populace lived within a separate portion in the city where they had their own mosques, bazaars, and hospitals. In Quanzhou, he met two prominent Iranians, Burhan al-Din of Kazerun and Sharif al-Din from Tabriz (both of whom were influential figures noted in the Yuan History as "A-mi-li-ding" and "Sai-fu-ding", respectively). While in Quanzhou he ascended the "Mount of the Hermit" and briefly visited a well-known Taoist monk in a cave.
He then travelled south along the Chinese coast to Guangzhou, where he lodged for two weeks with one of the city's wealthy merchants.
From Guangzhou he went north to Quanzhou and then proceeded to the city of Fuzhou, where he took up residence with Zahir al-Din and met Kawam al-Din and a fellow countryman named Al-Bushri of Ceuta, who had become a wealthy merchant in China. Al-Bushri accompanied Ibn Battuta northwards to Hangzhou and paid for the gifts that Ibn Battuta would present to the Emperor Huizong of Yuan.
Ibn Battuta said that Hangzhou was one of the largest cities he had ever seen, and he noted its charm, describing that the city sat on a beautiful lake surrounded by gentle green hills. He mentions the city's Muslim quarter and resided as a guest with a family of Egyptian origin. During his stay at Hangzhou he was particularly impressed by the large number of well-crafted and well-painted Chinese wooden ships, with coloured sails and silk awnings, assembling in the canals. Later he attended a banquet of the Yuan administrator of the city named Qurtai, who according to Ibn Battuta, was very fond of the skills of local Chinese conjurers. Ibn Battuta also mentions locals who worshipped a solar deity.
He described floating through the Grand Canal on a boat watching crop fields, orchids, merchants in black silk, and women in flowered silk and priests also in silk. In Beijing, Ibn Battuta referred to himself as the long-lost ambassador from the Delhi Sultanate and was invited to the Yuan imperial court of Emperor Huizong (who according to Ibn Battuta was worshipped by some people in China). Ibn Batutta noted that the palace of Khanbaliq was made of wood and that the ruler's "head wife" (Empress Qi) held processions in her honour.
Ibn Battuta also wrote he had heard of "the rampart of Yajuj and Majuj" that was "sixty days' travel" from the city of Zeitun (Quanzhou); Hamilton Alexander Rosskeen Gibb notes that Ibn Battuta believed that the Great Wall of China was built by Dhul-Qarnayn to contain Gog and Magog as mentioned in the Quran. However, Ibn Battuta, who asked about the wall in China, could find no one who had either seen it or knew of anyone who had seen it.
Ibn Battuta travelled from Beijing to Hangzhou, and then proceeded to Fuzhou. Upon his return to Quanzhou, he soon boarded a Chinese junk owned by the Sultan of Samudera Pasai Sultanate heading for Southeast Asia, whereupon Ibn Battuta was unfairly charged a hefty sum by the crew and lost much of what he had collected during his stay in China.
Battuta claimed that six slave soldiers and four slave girls were interred with the Emperor Huizong of Yuan in his grave, along with silver, gold, weapons, and carpets.
After returning to Quanzhou in 1346, Ibn Battuta began his journey back to Morocco. In Kozhikode, he once again considered throwing himself at the mercy of Muhammad bin Tughluq in Delhi, but thought better of it and decided to carry on to Mecca. On his way to Basra he passed through the Strait of Hormuz, where he learned that Abu Sa'id, the last ruler of the Ilkhanate dynasty, had died in Iran. Abu Sa'id's territories had subsequently collapsed due to a fierce civil war between the Iranians and Mongols.
In 1348, Ibn Battuta arrived in Damascus with the intention of retracing the route of his first hajj. He then learned that his father had died 15 years earlier and death became the dominant theme for the next year or so. The Black Death had struck and he stopped in Homs as the plague spread through Syria, Palestine, and Arabia. He heard of terrible death tolls in Gaza, but returned to Damascus that July where the death toll had reached 2,400 victims each day. When he stopped in Gaza he found it was depopulated, and in Egypt he stayed at Abu Sir. Reportedly deaths in Cairo had reached levels of 1,100 each day. He made hajj to Mecca then he decided to return to Morocco, nearly a quarter of a century after leaving home. On the way he made one last detour to Sardinia, then in 1349, returned to Tangier by way of Fez, only to discover that his mother had also died a few months before.
After a few days in Tangier, Ibn Battuta set out for a trip to the Muslim-controlled territory of al-Andalus on the Iberian Peninsula. King Alfonso XI of Castile and León had threatened to attack Gibraltar, so in 1350, Ibn Battuta joined a group of Muslims leaving Tangier with the intention of defending the port. By the time he arrived, the Black Death had killed Alfonso and the threat of invasion had receded, so he turned the trip into a sight-seeing tour ending up in Granada.
After his departure from al-Andalus he decided to travel through Morocco. On his return home, he stopped for a while in Marrakech, which was almost a ghost town following the recent plague and the transfer of the capital to Fez.
In the autumn of 1351, Ibn Battuta left Fez and made his way to the town of Sijilmasa on the northern edge of the Sahara in present-day Morocco. There he bought a number of camels and stayed for four months. He set out again with a caravan in February 1352 and after 25 days arrived at the dry salt lake bed of Taghaza with its salt mines. All of the local buildings were made from slabs of salt by the slaves of the Masufa tribe, who cut the salt in thick slabs for transport by camel. Taghaza was a commercial centre and awash with Malian gold, though Ibn Battuta did not form a favourable impression of the place, recording that it was plagued by flies and the water was brackish.
After a ten-day stay in Taghaza, the caravan set out for the oasis of Tasarahla (probably Bir al-Ksaib) where it stopped for three days in preparation for the last and most difficult leg of the journey across the vast desert. From Tasarahla, a Masufa scout was sent ahead to the oasis town of Oualata, where he arranged for water to be transported a distance of four days' travel to meet the thirsty caravan. Oualata was the southern terminus of the trans-Saharan trade route and had recently become part of the Mali Empire. Altogether, the caravan took two months to cross the 1,600 km (990 mi) of desert from Sijilmasa.
From there, Ibn Battuta travelled southwest along a river he believed to be the Nile (it was actually the river Niger), until he reached the capital of the Mali Empire. There he met Mansa Suleyman, king since 1341. Ibn Battuta disapproved of the fact that female slaves, servants, and even the daughters of the sultan went about exposing parts of their bodies not befitting a Muslim. He wrote in his Rihla that black Africans were characterised by "ill manners" and "contempt for white men", and that he "was long astonished at their feeble intellect and their respect for mean things." He left the capital in February accompanied by a local Malian merchant and journeyed overland by camel to Timbuktu. Though in the next two centuries it would become the most important city in the region, at that time it was a small city and relatively unimportant. It was during this journey that Ibn Battuta first encountered a hippopotamus. The animals were feared by the local boatmen and hunted with lances to which strong cords were attached. After a short stay in Timbuktu, Ibn Battuta journeyed down the Niger to Gao in a canoe carved from a single tree. At the time Gao was an important commercial center.
After spending a month in Gao, Ibn Battuta set off with a large caravan for the oasis of Takedda. On his journey across the desert, he received a message from the Sultan of Morocco commanding him to return home. He set off for Sijilmasa in September 1353, accompanying a large caravan transporting 600 female slaves, and arrived back in Morocco early in 1354.
Ibn Battuta's itinerary gives scholars a glimpse as to when Islam first began to spread into the heart of west Africa.
After returning home from his travels in 1354, and at the suggestion of the Marinid ruler of Morocco, Abu Inan Faris, Ibn Battuta dictated an account in Arabic of his journeys to Ibn Juzayy, a scholar whom he had previously met in Granada. The account is the only source for Ibn Battuta's adventures. The full title of the manuscript may be translated as A Masterpiece to Those Who Contemplate the Wonders of Cities and the Marvels of Travelling (تحفة النظار في غرائب الأمصار وعجائب الأسفار, Tuḥfat an-Nuẓẓār fī Gharāʾib al-Amṣār wa ʿAjāʾib al-Asfār). However, it is often simply referred to as The Travels (الرحلة, Rihla), in reference to a standard form of Arabic literature.
There is no indication that Ibn Battuta made any notes or had any journal during his twenty-nine years of travelling. When he came to dictate an account of his experiences he had to rely on memory and manuscripts produced by earlier travellers. Ibn Juzayy did not acknowledge his sources and presented some of the earlier descriptions as Ibn Battuta's own observations. When describing Damascus, Mecca, Medina, and some other places in the Middle East, he clearly copied passages from the account by the Andalusian Ibn Jubayr which had been written more than 150 years earlier. Similarly, most of Ibn Juzayy's descriptions of places in Palestine were copied from an account by the 13th-century traveller Muhammad al-Abdari.
Scholars do not believe that Ibn Battuta visited all the places he described and argue that in order to provide a comprehensive description of places in the Muslim world, he relied on hearsay evidence and made use of accounts by earlier travellers. For example, it is considered very unlikely that Ibn Battuta made a trip up the Volga River from New Sarai to visit Bolghar and there are serious doubts about a number of other journeys such as his trip to Sana'a in Yemen, his journey from Balkh to Bistam in Khorasan, and his trip around Anatolia.
Ibn Battuta's claim that a Maghrebian called "Abu'l Barakat the Berber" converted the Maldives to Islam is contradicted by an entirely different story which says that the Maldives were converted to Islam after miracles were performed by a Tabrizi named Maulana Shaikh Yusuf Shams-ud-din according to the Tarikh, the official history of the Maldives.
Some scholars have also questioned whether he really visited China. Ibn Battuta may have plagiarized entire sections of his descriptions of China, lifting them from works by other authors such as "Masalik al-absar fi mamalik al-amsar" by Shihab al-Umari and the account of Sulaiman al-Tajir, and possibly from Al Juwayni, Rashid al-Din, and an Alexander romance. Furthermore, Ibn Battuta's description and Marco Polo's writings share extremely similar sections and themes, with some of the same commentary; for example, it is unlikely that the 3rd Caliph Uthman ibn Affan had someone with the identical name in China who was encountered by Ibn Battuta.
However, even if the Rihla is not fully based on what its author personally witnessed, it provides an important account of much of the 14th-century world. Ibn Battuta kept concubines, for example in Delhi. He wedded several women, divorced at least some of them, and in Damascus, Malabar, Delhi, Bukhara, and the Maldives had children by them or by concubines. Ibn Battuta insulted Greeks as "enemies of Allah", drunkards and "swine eaters", while at the same time in Ephesus he purchased and used a Greek girl who was one of his many slave girls in his "harem" through Byzantium, Khorasan, Africa, and Palestine. It was two decades before he returned to find out what had happened to one of his wives and their child in Damascus.
Ibn Battuta often experienced culture shock in regions he visited where the local customs of recently converted peoples did not fit in with his orthodox Muslim background. Among the Turks and Mongols, he was astonished at the freedom and respect enjoyed by women and remarked that on seeing a Turkish couple in a bazaar one might assume that the man was the woman's servant when he was in fact her husband. He also felt that dress customs in the Maldives, and some sub-Saharan regions in Africa were too revealing.
Little is known about Ibn Battuta's life after completion of his Rihla in 1355. He was appointed a judge in Morocco and died in 1368 or 1369.
Ibn Battuta's work was unknown outside the Muslim world until the beginning of the 19th century, when the German traveller-explorer Ulrich Jasper Seetzen (1767–1811) acquired a collection of manuscripts in the Middle East, among which was a 94-page volume containing an abridged version of Ibn Juzayy's text. Three extracts were published in 1818 by the German orientalist Johann Kosegarten. A fourth extract was published the following year. French scholars were alerted to the initial publication by a lengthy review published in the Journal de Savants by the orientalist Silvestre de Sacy.
Three copies of another abridged manuscript were acquired by the Swiss traveller Johann Burckhardt and bequeathed to the University of Cambridge. He gave a brief overview of their content in a book published posthumously in 1819. The Arabic text was translated into English by the orientalist Samuel Lee and published in London in 1829.
In the 1830s, during the French occupation of Algeria, the Bibliothèque Nationale (BNF) in Paris acquired five manuscripts of Ibn Battuta's travels, of which two were complete. One manuscript containing just the second part of the work is dated 1356 and is believed to be Ibn Juzayy's autograph. The BNF manuscripts were used in 1843 by the Irish-French orientalist Baron de Slane to produce a translation into French of Ibn Battuta's visit to the Sudan. They were also studied by the French scholars Charles Defrémery and Beniamino Sanguinetti. Beginning in 1853 they published a series of four volumes containing a critical edition of the Arabic text together with a translation into French. In their introduction Defrémery and Sanguinetti praised Lee's annotations but were critical of his translation, which they claimed lacked precision, even in straightforward passages.
In 1929, exactly a century after the publication of Lee's translation, the historian and orientalist Hamilton Gibb published an English translation of selected portions of Defrémery and Sanguinetti's Arabic text. Gibb had proposed to the Hakluyt Society in 1922 that he should prepare an annotated translation of the entire Rihla into English. His intention was to divide the translated text into four volumes, each volume corresponding to one of the volumes published by Defrémery and Sanguinetti. The first volume was not published until 1958. Gibb died in 1971, having completed the first three volumes. The fourth volume was prepared by Charles Beckingham and published in 1994. Defrémery and Sanguinetti's printed text has now been translated into a number of other languages.
German Islamic studies scholar Ralph Elger views Battuta's travel account as an important literary work but doubts the historicity of much of its content, which he suspects to be a work of fiction compiled and inspired from other contemporary travel reports. Various other scholars have raised similar doubts.
In 1987, Ross E. Dunn similarly expressed doubts that any evidence would be found to support the narrative of the Rihla, but in 2010, Tim Mackintosh-Smith completed a multi-volume field study in dozens of the locales mentioned in the Rihla, in which he reports on previously unknown manuscripts of Islamic law kept in the archives of Al-Azhar University in Cairo that were copied by Ibn Battuta in Damascus in 1326, corroborating the date in the Rihla of his sojourn in Syria.
The largest themed mall in Dubai, UAE, the Ibn Battuta Mall is named for him and features both areas designed to recreate the exotic lands he visited on his travels and statuary tableaus depicting scenes from his life history.
A giant likeness of Battuta, alongside two others from the history of Arab exploration, the geographer and historian Al Bakri and the navigator and cartographer Ibn Majid, is displayed at the Mobility pavilion at Expo 2020 in Dubai in a section of the exhibition designed by Weta Workshop.
Tangier Ibn Battouta Airport is an international airport located in his hometown of Tangier, Morocco. | [
{
"paragraph_id": 0,
"text": "Abu Abdullah Muhammad ibn Battutah (/ˌɪbən bætˈtuːtɑː/; 24 February 1304 – 1368/1369), commonly known as Ibn Battuta, was a Maghrebi traveller, explorer and scholar. Over a period of thirty years from 1325 to 1354, Ibn Battuta visited most of North Africa, the Middle East, East Africa, Central Asia, South Asia, Southeast Asia, China, the Iberian Peninsula, and West Africa. Near the end of his life, he dictated an account of his journeys, titled A Gift to Those Who Contemplate the Wonders of Cities and the Marvels of Travelling, but commonly known as The Rihla.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Ibn Battuta travelled more than any other explorer in pre-modern history, totalling around 117,000 km (73,000 mi), surpassing Zheng He with about 50,000 km (31,000 mi) and Marco Polo with 24,000 km (15,000 mi). There have been doubts over the historicity of some of Ibn Battuta's travels, particularly as they reach farther East.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Ibn Battuta is a patronymic literally meaning \"son of the duckling\". His most common full name is given as Abu Abdullah Muhammad ibn Battuta. In his travelogue, the Rihla, he gives his full name as Shams al-Din Abu’Abdallah Muhammad ibn’Abdallah ibn Muhammad ibn Ibrahim ibn Muhammad ibn Yusuf Lawati al-Tanji ibn Battuta.",
"title": "Name"
},
{
"paragraph_id": 3,
"text": "All that is known about Ibn Battuta's life comes from the autobiographical information included in the account of his travels, which records that he was of Berber descent, born into a family of Islamic legal scholars in Tangier, known as qadis in the Muslim tradition in Morocco, on 24 February 1304, during the reign of the Marinid dynasty. His family belonged to a Berber tribe known as the Lawata. As a young man, he would have studied at a Sunni Maliki madhhab (Islamic jurisprudence school), the dominant form of education in North Africa at that time. Maliki Muslims requested Ibn Battuta serve as their religious judge, as he was from an area where it was practised.",
"title": "Early life"
},
{
"paragraph_id": 4,
"text": "On 2 Rajab in the Muslim year 725 Anno Hegirae (14 June 1325 Anno Domini on the Christian calendar), at the age of twenty-one, Ibn Battuta set off from his home town on a hajj, or pilgrimage, to Mecca, a journey that would ordinarily take sixteen months. He was eager to learn more about far-away lands and craved adventure. No one knew that he would not return to Morocco again for 24 years.",
"title": "Journeys"
},
{
"paragraph_id": 5,
"text": "I set out alone, having neither fellow-traveller in whose companionship I might find cheer, nor caravan whose part I might join, but swayed by an overmastering impulse within me and a desire long-cherished in my bosom to visit these illustrious sanctuaries. So I braced my resolution to quit my dear ones, female and male, and forsook my home as birds forsake their nests. My parents being yet in the bonds of life, it weighed sorely upon me to part from them, and both they and I were afflicted with sorrow at this separation.",
"title": "Journeys"
},
{
"paragraph_id": 6,
"text": "He travelled to Mecca overland, following the North African coast across the sultanates of Abd al-Wadid and Hafsid. The route took him through Tlemcen, Béjaïa, and then Tunis, where he stayed for two months. For safety, Ibn Battuta usually joined a caravan to reduce the risk of being robbed. He took a bride in the town of Sfax, but soon left her due to a dispute with the father. That was the first in a series of marriages that would feature in his travels.",
"title": "Journeys"
},
{
"paragraph_id": 7,
"text": "In the early spring of 1326, after a journey of over 3,500 km (2,200 mi), Ibn Battuta arrived at the port of Alexandria, at the time part of the Bahri Mamluk empire. He met two ascetic pious men in Alexandria. One was Sheikh Burhanuddin, who is supposed to have foretold the destiny of Ibn Battuta as a world traveller and told him, \"It seems to me that you are fond of foreign travel. You must visit my brother Fariduddin in India, Rukonuddin in Sind, and Burhanuddin in China. Convey my greetings to them.\" Another pious man Sheikh Murshidi interpreted the meaning of a dream of Ibn Battuta that he was meant to be a world traveller.",
"title": "Journeys"
},
{
"paragraph_id": 8,
"text": "He spent several weeks visiting sites in the area, and then headed inland to Cairo, the capital of the Mamluk Sultanate and an important city. After spending about a month in Cairo, he embarked on the first of many detours within the relative safety of Mamluk territory. Of the three usual routes to Mecca, Ibn Battuta chose the least-travelled, which involved a journey up the Nile valley, then east to the Red Sea port of Aydhab. Upon approaching the town, however, a local rebellion forced him to turn back.",
"title": "Journeys"
},
{
"paragraph_id": 9,
"text": "Ibn Battuta returned to Cairo and took a second side trip, this time to Mamluk-controlled Damascus. During his first trip he had encountered a holy man who prophesied that he would only reach Mecca by travelling through Syria. The diversion held an added advantage; because of the holy places that lay along the way, including Hebron, Jerusalem, and Bethlehem, the Mamluk authorities spared no efforts in keeping the route safe for pilgrims. Without this help many travellers would be robbed and murdered.",
"title": "Journeys"
},
{
"paragraph_id": 10,
"text": "After spending the Muslim month of Ramadan, during August, in Damascus, he joined a caravan travelling the 1,300 km (810 mi) south to Medina, site of the Mosque of the Islamic prophet Muhammad. After four days in the town, he journeyed on to Mecca while visiting holy sites along the way; upon his arrival to Mecca he completed his first pilgrimage, in November, and he took the honorific status of El-Hajji. Rather than returning home, Ibn Battuta decided to continue travelling, choosing as his next destination the Ilkhanate, a Mongol Khanate, to the northeast.",
"title": "Journeys"
},
{
"paragraph_id": 11,
"text": "On 17 November 1326, following a month spent in Mecca, Ibn Battuta joined a large caravan of pilgrims returning to Iraq across the Arabian Peninsula. The group headed north to Medina and then, travelling at night, turned northeast across the Najd plateau to Najaf, on a journey that lasted about two weeks. In Najaf, he visited the mausoleum of Ali, the Fourth Caliph.",
"title": "Journeys"
},
{
"paragraph_id": 12,
"text": "Then, instead of continuing to Baghdad with the caravan, Ibn Battuta started a six-month detour that took him into Iran. From Najaf, he journeyed to Wasit, then followed the river Tigris south to Basra. His next destination was the town of Isfahan across the Zagros Mountains in Iran. He then headed south to Shiraz, a large, flourishing city spared the destruction wrought by Mongol invaders on many more northerly towns. Finally, he returned across the mountains to Baghdad, arriving there in June 1327. Parts of the city were still ruined from the damage inflicted by Hulagu Khan's invading army in 1258.",
"title": "Journeys"
},
{
"paragraph_id": 13,
"text": "In Baghdad, he found Abu Sa'id, the last Mongol ruler of the unified Ilkhanate, leaving the city and heading north with a large retinue. Ibn Battuta joined the royal caravan for a while, then turned north on the Silk Road to Tabriz, the first major city in the region to open its gates to the Mongols and by then an important trading centre as most of its nearby rivals had been razed by the Mongol invaders.",
"title": "Journeys"
},
{
"paragraph_id": 14,
"text": "Ibn Battuta left again for Baghdad, probably in July, but first took an excursion northwards along the river Tigris. He visited Mosul, where he was the guest of the Ilkhanate governor, and then the towns of Cizre (Jazirat ibn 'Umar) and Mardin in modern-day Turkey. At a hermitage on a mountain near Sinjar, he met a Kurdish mystic who gave him some silver coins. Once back in Mosul, he joined a \"feeder\" caravan of pilgrims heading south to Baghdad, where they would meet up with the main caravan that crossed the Arabian Desert to Mecca. Ill with diarrhoea, he arrived in the city weak and exhausted for his second hajj.",
"title": "Journeys"
},
{
"paragraph_id": 15,
"text": "Ibn Battuta remained in Mecca for some time (the Rihla suggests about three years, from September 1327 until autumn 1330). Problems with chronology, however, lead commentators to suggest that he may have left after the 1328 hajj.",
"title": "Journeys"
},
{
"paragraph_id": 16,
"text": "After the hajj in either 1328 or 1330, he made his way to the port of Jeddah on the Red Sea coast. From there he followed the coast in a series of boats (known as a jalbah, these were small craft made of wooden planks sewn together, lacking an established phrase) making slow progress against the prevailing south-easterly winds. Once in Yemen he visited Zabīd and later the highland town of Ta'izz, where he met the Rasulid dynasty king (Malik) Mujahid Nur al-Din Ali. Ibn Battuta also mentions visiting Sana'a, but whether he actually did so is doubtful. In all likelihood, he went directly from Ta'izz to the important trading port of Aden, arriving around the beginning of 1329 or 1331.",
"title": "Journeys"
},
{
"paragraph_id": 17,
"text": "From Aden, Ibn Battuta embarked on a ship heading for Zeila on the coast of Somalia. He then moved on to Cape Guardafui further down the Somali seaboard, spending about a week in each location. Later he would visit Mogadishu, the then pre-eminent city of the \"Land of the Berbers\" (بلد البربر Balad al-Barbar, the medieval Arabic term for the Horn of Africa).",
"title": "Journeys"
},
{
"paragraph_id": 18,
"text": "When Ibn Battuta arrived in 1332, Mogadishu stood at the zenith of its prosperity. He described it as \"an exceedingly large city\" with many rich merchants, noted for its high-quality fabric that was exported to other countries, including Egypt. Battuta added that the city was ruled by a Somali Sultan, Abu Bakr ibn Shaikh 'Umar. He noted that Sultan Abu Bakr had dark skin complexion and spoke in his native tongue (Somali), but was also fluent in Arabic. The Sultan also had a retinue of wazirs (ministers), legal experts, commanders, royal eunuchs, and other officials at his beck and call.",
"title": "Journeys"
},
{
"paragraph_id": 19,
"text": "Ibn Battuta continued by ship south to the Swahili coast, a region then known in Arabic as the Bilad al-Zanj (\"Land of the Zanj\") with an overnight stop at the island town of Mombasa. Although relatively small at the time, Mombasa would become important in the following century. After a journey along the coast, Ibn Battuta next arrived in the island town of Kilwa in present-day Tanzania, which had become an important transit centre of the gold trade. He described the city as \"one of the finest and most beautifully built towns; all the buildings are of wood, and the houses are roofed with dīs reeds\".",
"title": "Journeys"
},
{
"paragraph_id": 20,
"text": "Ibn Battuta recorded his visit to the Kilwa Sultanate in 1330, and commented favourably on the humility and religion of its ruler, Sultan al-Hasan ibn Sulaiman, a descendant of the legendary Ali ibn al-Hassan Shirazi. He further wrote that the authority of the Sultan extended from Malindi in the north to Inhambane in the south and was particularly impressed by the planning of the city, believing it to be the reason for Kilwa's success along the coast. During this period, he described the construction of the Palace of Husuni Kubwa and a significant extension to the Great Mosque of Kilwa, which was made of coral stones and was the largest mosque of its kind. With a change in the monsoon winds, Ibn Battuta sailed back to Arabia, first to Oman and the Strait of Hormuz then on to Mecca for the hajj of 1330 (or 1332).",
"title": "Journeys"
},
{
"paragraph_id": 21,
"text": "After his third pilgrimage to Mecca, Ibn Battuta decided to seek employment with the Sultan of Delhi, Muhammad bin Tughluq. In the autumn of 1330 (or 1332), he set off for the Seljuk controlled territory of Anatolia to take an overland route to India. He crossed the Red Sea and the Eastern Desert to reach the Nile valley and then headed north to Cairo. From there he crossed the Sinai Peninsula to Palestine and then travelled north again through some of the towns that he had visited in 1326. From the Syrian port of Latakia, a Genoese ship took him (and his companions) to Alanya on the southern coast of modern-day Turkey.",
"title": "Journeys"
},
{
"paragraph_id": 22,
"text": "He then journeyed westwards along the coast to the port of Antalya. In the town he met members of one of the semi-religious fityan associations. These were a feature of most Anatolian towns in the 13th and 14th centuries. The members were young artisans and had at their head a leader with the title of Akhil. The associations specialised in welcoming travellers. Ibn Battuta was very impressed with the hospitality that he received and would later stay in their hospices in more than 25 towns in Anatolia. From Antalya Ibn Battuta headed inland to Eğirdir which was the capital of the Hamidids. He spent Ramadan (June 1331 or May 1333) in the city.",
"title": "Journeys"
},
{
"paragraph_id": 23,
"text": "From this point his itinerary across Anatolia in the Rihla becomes confused. Ibn Battuta describes travelling westwards from Eğirdir to Milas and then skipping 420 km (260 mi) eastward past Eğirdir to Konya. He then continues travelling in an easterly direction, reaching Erzurum from where he skips 1,160 km (720 mi) back to Birgi which lies north of Milas. Historians believe that Ibn Battuta visited a number of towns in central Anatolia, but not in the order in which he describes.",
"title": "Journeys"
},
{
"paragraph_id": 24,
"text": "When Ibn Battuta arrived in Iznik, it had just been conquered by Orhan, Sultan of the nascent Ottoman Empire. Orhan was away and his wife was in command of the nearby stationed soldiers, Ibn Battuta gave this account of Orhan's wife: \"A pious and excellent woman. She treated me honourably, gave me hospitality and sent gifts.\"",
"title": "Journeys"
},
{
"paragraph_id": 25,
"text": "Ibn Battuta's account of Orhan:",
"title": "Journeys"
},
{
"paragraph_id": 26,
"text": "The greatest of the kings of the Turkmens and the richest in wealth, lands and military forces. Of fortresses, he possesses nearly a hundred, and for most of his time, he is continually engaged in making a round of them, staying in each fortress for some days to put it in good order and examine its condition. It is said that he has never stayed for a whole month in any one town. He also fights with the infidels continually and keeps them under siege.",
"title": "Journeys"
},
{
"paragraph_id": 27,
"text": "Ibn Battuta had also visited Bursa which at the time was the capital of the Ottoman Beylik, he described Bursa as \"a great and important city with fine bazaars and wide streets, surrounded on all sides with gardens and running springs\".",
"title": "Journeys"
},
{
"paragraph_id": 28,
"text": "He also visited the Beylik of Aydin. Ibn Battuta stated that the ruler of the Beylik of Aydin had twenty Greek slaves at the entrance of his palace and Ibn Battuta was given a Greek slave as a gift. His visit to Anatolia was the first time in his travels he acquired a servant; the ruler of Aydin gifted him his first slave. Later, he purchased a young Greek girl for 40 dinars in Ephesus, was gifted another slave in Izmir by the Sultan, and purchased a second girl in Balikesir. The conspicuous evidence of his wealth and prestige continued to grow.",
"title": "Journeys"
},
{
"paragraph_id": 29,
"text": "From Sinope he took a sea route to the Crimean Peninsula, arriving in the Golden Horde realm. He went to the port town of Azov, where he met with the emir of the Khan, then to the large and rich city of Majar. He left Majar to meet with Uzbeg Khan's travelling court (Orda), which was at the time near Mount Beshtau. From there he made a journey to Bolghar, which became the northernmost point he reached, and noted its unusually short nights in summer (by the standards of the subtropics). Then he returned to the Khan's court and with it moved to Astrakhan.",
"title": "Journeys"
},
{
"paragraph_id": 30,
"text": "Ibn Battuta recorded that while in Bolghar he wanted to travel further north into the land of darkness. The land is snow-covered throughout (northern Siberia) and the only means of transport is dog-drawn sled. There lived a mysterious people who were reluctant to show themselves. They traded with southern people in a peculiar way. Southern merchants brought various goods and placed them in an open area on the snow in the night, then returned to their tents. Next morning they came to the place again and found their merchandise taken by the mysterious people, but in exchange they found fur-skins which could be used for making valuable coats, jackets, and other winter garments. The trade was done between merchants and the mysterious people without seeing each other. As Ibn Battuta was not a merchant and saw no benefit of going there he abandoned the travel to this land of darkness.",
"title": "Journeys"
},
{
"paragraph_id": 31,
"text": "When they reached Astrakhan, Öz Beg Khan had just given permission for one of his pregnant wives, Princess Bayalun, a daughter of Byzantine emperor Andronikos III Palaiologos, to return to her home city of Constantinople to give birth. Ibn Battuta talked his way into this expedition, which would be his first beyond the boundaries of the Islamic world.",
"title": "Journeys"
},
{
"paragraph_id": 32,
"text": "Arriving in Constantinople towards the end of 1332 (or 1334), he met the Byzantine emperor Andronikos III Palaiologos. He visited the great church of Hagia Sophia and spoke with an Eastern Orthodox priest about his travels in the city of Jerusalem. After a month in the city, Ibn Battuta returned to Astrakhan, then arrived in the capital city Sarai al-Jadid and reported the accounts of his travels to Sultan Öz Beg Khan (r. 1313–1341). Then he continued past the Caspian and Aral Seas to Bukhara and Samarkand, where he visited the court of another Mongol khan, Tarmashirin (r. 1331–1334) of the Chagatai Khanate. From there, he journeyed south to Afghanistan, then crossed into India via the mountain passes of the Hindu Kush. In the Rihla, he mentions these mountains and the history of the range in slave trading. He wrote,",
"title": "Journeys"
},
{
"paragraph_id": 33,
"text": "After this I proceeded to the city of Barwan, in the road to which is a high mountain, covered with snow and exceedingly cold; they call it the Hindu Kush, that is Hindu-slayer, because most of the slaves brought thither from India die on account of the intenseness of the cold.",
"title": "Journeys"
},
{
"paragraph_id": 34,
"text": "Ibn Battuta and his party reached the Indus River on 12 September 1333. From there, he made his way to Delhi and became acquainted with the sultan, Muhammad bin Tughluq.",
"title": "Journeys"
},
{
"paragraph_id": 35,
"text": "Muhammad bin Tughluq was renowned as the wealthiest man in the Muslim world at that time. He patronized various scholars, Sufis, qadis, viziers, and other functionaries in order to consolidate his rule. As with Mamluk Egypt, the Tughlaq Dynasty was a rare vestigial example of Muslim rule after a Mongol invasion. On the strength of his years of study in Mecca, Ibn Battuta was appointed a qadi, or judge, by the sultan. However, he found it difficult to enforce Islamic law beyond the sultan's court in Delhi, due to lack of Islamic appeal in India.",
"title": "Journeys"
},
{
"paragraph_id": 36,
"text": "It is uncertain by which route Ibn Battuta entered the Indian subcontinent but it is known that he was kidnapped and robbed by rebels on his journey to the Indian coast. He may have entered via the Khyber Pass and Peshawar, or further south. He crossed the Sutlej river near the city of Pakpattan, in modern-day Pakistan, where he paid obeisance at the shrine of Baba Farid, before crossing southwest into Rajput country. From the Rajput kingdom of Sarsatti, Battuta visited Hansi in India, describing it as \"among the most beautiful cities, the best constructed and the most populated; it is surrounded with a strong wall, and its founder is said to be one of the great non-Muslim kings, called Tara\". Upon his arrival in Sindh, Ibn Battuta mentions the Indian rhinoceros that lived on the banks of the Indus.",
"title": "Journeys"
},
{
"paragraph_id": 37,
"text": "The Sultan was erratic even by the standards of the time and for six years Ibn Battuta veered between living the high life of a trusted subordinate and falling under suspicion of treason for a variety of offences. His plan to leave on the pretext of taking another hajj was stymied by the Sultan. The opportunity for Battuta to leave Delhi finally arose in 1341 when an embassy arrived from the Yuan dynasty of China asking for permission to rebuild a Himalayan Buddhist temple popular with Chinese pilgrims.",
"title": "Journeys"
},
{
"paragraph_id": 38,
"text": "Ibn Battuta was given charge of the embassy but en route to the coast at the start of the journey to China, he and his large retinue were attacked by a group of bandits. Separated from his companions, he was robbed, kidnapped, and nearly lost his life. Despite this setback, within ten days he had caught up with his group and continued on to Khambhat in the Indian state of Gujarat. From there, they sailed to Calicut (now known as Kozhikode), where Portuguese explorer Vasco da Gama would land two centuries later. While in Calicut, Battuta was the guest of the ruling Zamorin. While Ibn Battuta visited a mosque on shore, a storm arose and one of the ships of his expedition sank. The other ship then sailed without him only to be seized by a local Sumatran king a few months later.",
"title": "Journeys"
},
{
"paragraph_id": 39,
"text": "Afraid to return to Delhi and be seen as a failure, he stayed for a time in southern India under the protection of Jamal-ud-Din, ruler of the small but powerful Nawayath sultanate on the banks of the Sharavathi river next to the Arabian Sea. This area is today known as Hosapattana and lies in the Honavar administrative district of Uttara Kannada. Following the overthrow of the sultanate, Ibn Battuta had no choice but to leave India. Although determined to continue his journey to China, he first took a detour to visit the Maldive Islands where he worked as a judge.",
"title": "Journeys"
},
{
"paragraph_id": 40,
"text": "He spent nine months on the islands, much longer than he had intended. When he arrived at the capital, Malé, Ibn Battuta did not plan to stay. However, the leaders of the formerly Buddhist nation that had recently converted to Islam were looking for a chief judge, someone who knew Arabic and the Qur'an. To convince him to stay they gave him pearls, gold jewellery, and slaves, while at the same time making it impossible for him to leave by ship. Compelled into staying, he became a chief judge and married into the royal family of Omar I.",
"title": "Journeys"
},
{
"paragraph_id": 41,
"text": "Ibn Battuta took on his duties as a judge with keenness and strived to transform local practices to conform to a stricter application of Muslim law. He commanded that men who did not attend Friday prayer be publicly whipped, and that robbers' right hand be cut off. He forbade women from being topless in public, which had previously been the custom. However, these and other strict judgments began to antagonize the island nation's rulers, and involved him in power struggles and political intrigues. Ibn Battuta resigned from his job as chief qadi, although in all likelihood it was inevitable that he would have been dismissed.",
"title": "Journeys"
},
{
"paragraph_id": 42,
"text": "Throughout his travels, Ibn Battuta kept close company with women, usually taking a wife whenever he stopped for any length of time at one place, and then divorcing her when he moved on. While in the Maldives, Ibn Battuta took four wives. In his Travels he wrote that in the Maldives the effect of small dowries and female non-mobility combined to, in effect, make a marriage a convenient temporary arrangement for visiting male travellers and sailors.",
"title": "Journeys"
},
{
"paragraph_id": 43,
"text": "From the Maldives, he carried on to Sri Lanka and visited Sri Pada and Tenavaram temple. Ibn Battuta's ship almost sank on embarking from Sri Lanka, only for the vessel that came to his rescue to suffer an attack by pirates. Stranded onshore, he worked his way back to the Madurai kingdom in India. Here he spent some time in the court of the short-lived Madurai Sultanate under Ghiyas-ud-Din Muhammad Damghani, from where he returned to the Maldives and boarded a Chinese junk, still intending to reach China and take up his ambassadorial post.",
"title": "Journeys"
},
{
"paragraph_id": 44,
"text": "He reached the port of Chittagong in modern-day Bangladesh intending to travel to Sylhet to meet Shah Jalal, who became so renowned that Ibn Battuta, then in Chittagong, made a one-month journey through the mountains of Kamaru near Sylhet to meet him. On his way to Sylhet, Ibn Battuta was greeted by several of Shah Jalal's disciples who had come to assist him on his journey many days before he had arrived. At the meeting in 1345 CE, Ibn Battuta noted that Shah Jalal was tall and lean, fair in complexion and lived by the mosque in a cave, where his only item of value was a goat he kept for milk, butter, and yogurt. He observed that the companions of the Shah Jalal were foreign and known for their strength and bravery. He also mentions that many people would visit the Shah to seek guidance. Ibn Battuta went further north into Assam, then turned around and continued with his original plan.",
"title": "Journeys"
},
{
"paragraph_id": 45,
"text": "In 1345, Ibn Battuta traveled to Samudra Pasai Sultanate (called \"al-Jawa\") in present-day Aceh, Northern Sumatra, after 40 days voyage from Sunur Kawan. He notes in his travel log that the ruler of Samudra Pasai was a pious Muslim named Sultan Al-Malik Al-Zahir Jamal-ad-Din, who performed his religious duties with utmost zeal and often waged campaigns against animists in the region. The island of Sumatra, according to Ibn Battuta, was rich in camphor, areca nut, cloves, and tin.",
"title": "Journeys"
},
{
"paragraph_id": 46,
"text": "The madh'hab he observed was Imam Al-Shafi‘i, whose customs were similar to those he had previously seen in coastal India, especially among the Mappila Muslims, who were also followers of Imam Al-Shafi‘i. At that time Samudra Pasai marked the end of Dar al-Islam, because no territory east of this was ruled by a Muslim. Here he stayed for about two weeks in the wooden walled town as a guest of the sultan, and then the sultan provided him with supplies and sent him on his way on one of his own junks to China.",
"title": "Journeys"
},
{
"paragraph_id": 47,
"text": "Ibn Battuta first sailed for 21 days to a place called \"Mul Jawa\" (island of Java or Majapahit Java) which was a center of a Hindu empire. The empire spanned 2 months of travel, and ruled over the country of Qaqula and Qamara. He arrived at the walled city named Qaqula/Kakula, and observed that the city had war junks for pirate raiding and collecting tolls and that elephants were employed for various purposes. He met the ruler of Mul Jawa and stayed as a guest for three days.",
"title": "Journeys"
},
{
"paragraph_id": 48,
"text": "Ibn Battuta then sailed to a state called Kaylukari in the land of Tawalisi, where he met Urduja, a local princess. Urduja was a brave warrior, and her people were opponents of the Yuan dynasty. She was described as an \"idolater\", but could write the phrase Bismillah in Islamic calligraphy. The locations of Kaylukari and Tawalisi are disputed. Kaylukari might referred to Po Klong Garai in Champa (now southern Vietnam), and Urduja might be an aristocrat of Champa or Dai Viet. Filipinos widely believe that Kaylukari was in present-day Pangasinan Province of the Philippines. Their opposition to the Mongols might indicate 2 possible locations: Japan and Java (Majapahit). In modern times, Urduja has been featured in Filipino textbooks and films as a national heroine. Numerous other locations have been proposed, ranging from Java to somewhere in Guangdong Province, China. However, Sir Henry Yule and William Henry Scott consider both Tawalisi and Urduja to be entirely fictitious. (See Tawalisi for details.) From Kaylukari, Ibn Battuta finally reached Quanzhou in Fujian Province, China.",
"title": "Journeys"
},
{
"paragraph_id": 49,
"text": "In the year 1345, Ibn Battuta arrived at Quanzhou in China's Fujian province, then under the rule of the Mongol-led Yuan dynasty. One of the first things he noted was that Muslims referred to the city as \"Zaitun\" (meaning olive), but Ibn Battuta could not find any olives anywhere. He mentioned local artists and their mastery in making portraits of newly arrived foreigners; these were for security purposes. Ibn Battuta praised the craftsmen and their silk and porcelain; as well as fruits such as plums and watermelons and the advantages of paper money.",
"title": "Journeys"
},
{
"paragraph_id": 50,
"text": "He described the manufacturing process of large ships in the city of Quanzhou. He also mentioned Chinese cuisine and its usage of animals such as frogs, pigs, and even dogs which were sold in the markets, and noted that the chickens in China were larger than those in the west. Scholars however have pointed out numerous errors given in Ibn Battuta's account of China, for example confusing the Yellow River with the Grand Canal and other waterways, as well as believing that porcelain was made from coal.",
"title": "Journeys"
},
{
"paragraph_id": 51,
"text": "In Quanzhou, Ibn Battuta was welcomed by the head of the local Muslim merchants (possibly a fānzhǎng or \"Leader of Foreigners\" simplified Chinese: 番长; traditional Chinese: 番長; pinyin: fānzhǎng) and Sheikh al-Islam (Imam), who came to meet him with flags, drums, trumpets, and musicians. Ibn Battuta noted that the Muslim populace lived within a separate portion in the city where they had their own mosques, bazaars, and hospitals. In Quanzhou, he met two prominent Iranians, Burhan al-Din of Kazerun and Sharif al-Din from Tabriz (both of whom were influential figures noted in the Yuan History as \"A-mi-li-ding\" and \"Sai-fu-ding\", respectively). While in Quanzhou he ascended the \"Mount of the Hermit\" and briefly visited a well-known Taoist monk in a cave.",
"title": "Journeys"
},
{
"paragraph_id": 52,
"text": "He then travelled south along the Chinese coast to Guangzhou, where he lodged for two weeks with one of the city's wealthy merchants.",
"title": "Journeys"
},
{
"paragraph_id": 53,
"text": "From Guangzhou he went north to Quanzhou and then proceeded to the city of Fuzhou, where he took up residence with Zahir al-Din and met Kawam al-Din and a fellow countryman named Al-Bushri of Ceuta, who had become a wealthy merchant in China. Al-Bushri accompanied Ibn Battuta northwards to Hangzhou and paid for the gifts that Ibn Battuta would present to the Emperor Huizong of Yuan.",
"title": "Journeys"
},
{
"paragraph_id": 54,
"text": "Ibn Battuta said that Hangzhou was one of the largest cities he had ever seen, and he noted its charm, describing that the city sat on a beautiful lake surrounded by gentle green hills. He mentions the city's Muslim quarter and resided as a guest with a family of Egyptian origin. During his stay at Hangzhou he was particularly impressed by the large number of well-crafted and well-painted Chinese wooden ships, with coloured sails and silk awnings, assembling in the canals. Later he attended a banquet of the Yuan administrator of the city named Qurtai, who according to Ibn Battuta, was very fond of the skills of local Chinese conjurers. Ibn Battuta also mentions locals who worshipped a solar deity.",
"title": "Journeys"
},
{
"paragraph_id": 55,
"text": "He described floating through the Grand Canal on a boat watching crop fields, orchids, merchants in black silk, and women in flowered silk and priests also in silk. In Beijing, Ibn Battuta referred to himself as the long-lost ambassador from the Delhi Sultanate and was invited to the Yuan imperial court of Emperor Huizong (who according to Ibn Battuta was worshipped by some people in China). Ibn Batutta noted that the palace of Khanbaliq was made of wood and that the ruler's \"head wife\" (Empress Qi) held processions in her honour.",
"title": "Journeys"
},
{
"paragraph_id": 56,
"text": "Ibn Battuta also wrote he had heard of \"the rampart of Yajuj and Majuj\" that was \"sixty days' travel\" from the city of Zeitun (Quanzhou); Hamilton Alexander Rosskeen Gibb notes that Ibn Battuta believed that the Great Wall of China was built by Dhul-Qarnayn to contain Gog and Magog as mentioned in the Quran. However, Ibn Battuta, who asked about the wall in China, could find no one who had either seen it or knew of anyone who had seen it.",
"title": "Journeys"
},
{
"paragraph_id": 57,
"text": "Ibn Battuta travelled from Beijing to Hangzhou, and then proceeded to Fuzhou. Upon his return to Quanzhou, he soon boarded a Chinese junk owned by the Sultan of Samudera Pasai Sultanate heading for Southeast Asia, whereupon Ibn Battuta was unfairly charged a hefty sum by the crew and lost much of what he had collected during his stay in China.",
"title": "Journeys"
},
{
"paragraph_id": 58,
"text": "Battuta claimed that the Emperor Huizong of Yuan had interred with him in his grave six slave soldiers and four girl slaves. Silver, gold, weapons, and carpets were put into the grave.",
"title": "Journeys"
},
{
"paragraph_id": 59,
"text": "After returning to Quanzhou in 1346, Ibn Battuta began his journey back to Morocco. In Kozhikode, he once again considered throwing himself at the mercy of Muhammad bin Tughluq in Delhi, but thought better of it and decided to carry on to Mecca. On his way to Basra he passed through the Strait of Hormuz, where he learned that Abu Sa'id, last ruler of the Ilkhanate Dynasty had died in Iran. Abu Sa'id's territories had subsequently collapsed due to a fierce civil war between the Iranians and Mongols.",
"title": "Return"
},
{
"paragraph_id": 60,
"text": "In 1348, Ibn Battuta arrived in Damascus with the intention of retracing the route of his first hajj. He then learned that his father had died 15 years earlier and death became the dominant theme for the next year or so. The Black Death had struck and he stopped in Homs as the plague spread through Syria, Palestine, and Arabia. He heard of terrible death tolls in Gaza, but returned to Damascus that July where the death toll had reached 2,400 victims each day. When he stopped in Gaza he found it was depopulated, and in Egypt he stayed at Abu Sir. Reportedly deaths in Cairo had reached levels of 1,100 each day. He made hajj to Mecca then he decided to return to Morocco, nearly a quarter of a century after leaving home. On the way he made one last detour to Sardinia, then in 1349, returned to Tangier by way of Fez, only to discover that his mother had also died a few months before.",
"title": "Return"
},
{
"paragraph_id": 61,
"text": "After a few days in Tangier, Ibn Battuta set out for a trip to the Muslim-controlled territory of al-Andalus on the Iberian Peninsula. King Alfonso XI of Castile and León had threatened to attack Gibraltar, so in 1350, Ibn Battuta joined a group of Muslims leaving Tangier with the intention of defending the port. By the time he arrived, the Black Death had killed Alfonso and the threat of invasion had receded, so he turned the trip into a sight-seeing tour ending up in Granada.",
"title": "Return"
},
{
"paragraph_id": 62,
"text": "After his departure from al-Andalus he decided to travel through Morocco. On his return home, he stopped for a while in Marrakech, which was almost a ghost town following the recent plague and the transfer of the capital to Fez.",
"title": "Return"
},
{
"paragraph_id": 63,
"text": "In the autumn of 1351, Ibn Battuta left Fez and made his way to the town of Sijilmasa on the northern edge of the Sahara in present-day Morocco. There he bought a number of camels and stayed for four months. He set out again with a caravan in February 1352 and after 25 days arrived at the dry salt lake bed of Taghaza with its salt mines. All of the local buildings were made from slabs of salt by the slaves of the Masufa tribe, who cut the salt in thick slabs for transport by camel. Taghaza was a commercial centre and awash with Malian gold, though Ibn Battuta did not form a favourable impression of the place, recording that it was plagued by flies and the water was brackish.",
"title": "Return"
},
{
"paragraph_id": 64,
"text": "After a ten-day stay in Taghaza, the caravan set out for the oasis of Tasarahla (probably Bir al-Ksaib) where it stopped for three days in preparation for the last and most difficult leg of the journey across the vast desert. From Tasarahla, a Masufa scout was sent ahead to the oasis town of Oualata, where he arranged for water to be transported a distance of four days travel where it would meet the thirsty caravan. Oualata was the southern terminus of the trans-Saharan trade route and had recently become part of the Mali Empire. Altogether, the caravan took two months to cross the 1,600 km (990 mi) of desert from Sijilmasa.",
"title": "Return"
},
{
"paragraph_id": 65,
"text": "From there, Ibn Battuta travelled southwest along a river he believed to be the Nile (it was actually the river Niger), until he reached the capital of the Mali Empire. There he met Mansa Suleyman, king since 1341. Ibn Battuta disapproved of the fact that female slaves, servants, and even the daughters of the sultan went about exposing parts of their bodies not befitting a Muslim. He wrote in his Rihla that black Africans were characterised by \"ill manners\" and \"contempt for white men\", and that he \"was long astonished at their feeble intellect and their respect for mean things.\" He left the capital in February accompanied by a local Malian merchant and journeyed overland by camel to Timbuktu. Though in the next two centuries it would become the most important city in the region, at that time it was a small city and relatively unimportant. It was during this journey that Ibn Battuta first encountered a hippopotamus. The animals were feared by the local boatmen and hunted with lances to which strong cords were attached. After a short stay in Timbuktu, Ibn Battuta journeyed down the Niger to Gao in a canoe carved from a single tree. At the time Gao was an important commercial center.",
"title": "Return"
},
{
"paragraph_id": 66,
"text": "After spending a month in Gao, Ibn Battuta set off with a large caravan for the oasis of Takedda. On his journey across the desert, he received a message from the Sultan of Morocco commanding him to return home. He set off for Sijilmasa in September 1353, accompanying a large caravan transporting 600 female slaves, and arrived back in Morocco early in 1354.",
"title": "Return"
},
{
"paragraph_id": 67,
"text": "Ibn Battuta's itinerary gives scholars a glimpse as to when Islam first began to spread into the heart of west Africa.",
"title": "Return"
},
{
"paragraph_id": 68,
"text": "After returning home from his travels in 1354, and at the suggestion of the Marinid ruler of Morocco, Abu Inan Faris, Ibn Battuta dictated an account in Arabic of his journeys to Ibn Juzayy, a scholar whom he had previously met in Granada. The account is the only source for Ibn Battuta's adventures. The full title of the manuscript may be translated as A Masterpiece to Those Who Contemplate the Wonders of Cities and the Marvels of Travelling (تحفة النظار في غرائب الأمصار وعجائب الأسفار, Tuḥfat an-Nuẓẓār fī Gharāʾib al-Amṣār wa ʿAjāʾib al-Asfār). However, it is often simply referred to as The Travels (الرحلة, Rihla), in reference to a standard form of Arabic literature.",
"title": "Works"
},
{
"paragraph_id": 69,
"text": "There is no indication that Ibn Battuta made any notes or had any journal during his twenty-nine years of travelling. When he came to dictate an account of his experiences he had to rely on memory and manuscripts produced by earlier travellers. Ibn Juzayy did not acknowledge his sources and presented some of the earlier descriptions as Ibn Battuta's own observations. When describing Damascus, Mecca, Medina, and some other places in the Middle East, he clearly copied passages from the account by the Andalusian Ibn Jubayr which had been written more than 150 years earlier. Similarly, most of Ibn Juzayy's descriptions of places in Palestine were copied from an account by the 13th-century traveller Muhammad al-Abdari.",
"title": "Works"
},
{
"paragraph_id": 70,
"text": "Scholars do not believe that Ibn Battuta visited all the places he described and argue that in order to provide a comprehensive description of places in the Muslim world, he relied on hearsay evidence and made use of accounts by earlier travellers. For example, it is considered very unlikely that Ibn Battuta made a trip up the Volga River from New Sarai to visit Bolghar and there are serious doubts about a number of other journeys such as his trip to Sana'a in Yemen, his journey from Balkh to Bistam in Khorasan, and his trip around Anatolia.",
"title": "Works"
},
{
"paragraph_id": 71,
"text": "Ibn Battuta's claim that a Maghrebian called \"Abu'l Barakat the Berber\" converted the Maldives to Islam is contradicted by an entirely different story which says that the Maldives were converted to Islam after miracles were performed by a Tabrizi named Maulana Shaikh Yusuf Shams-ud-din according to the Tarikh, the official history of the Maldives.",
"title": "Works"
},
{
"paragraph_id": 72,
"text": "Some scholars have also questioned whether he really visited China. Ibn Battuta may have plagiarized entire sections of his descriptions of China lifted from works by other authors like \"Masalik al-absar fi mamalik al-amsar\" by Shihab al-Umari, Sulaiman al-Tajir, and possibly from Al Juwayni, Rashid al din, and an Alexander romance. Furthermore, Ibn Battuta's description and Marco Polo's writings share extremely similar sections and themes, with some of the same commentary, e.g. it is unlikely that the 3rd Caliph Uthman ibn Affan had someone with the identical name in China who was encountered by Ibn Battuta.",
"title": "Works"
},
{
"paragraph_id": 73,
"text": "However, even if the Rihla is not fully based on what its author personally witnessed, it provides an important account of much of the 14th-century world. Concubines were used by Ibn Battuta such as in Delhi. He wedded several women, divorced at least some of them, and in Damascus, Malabar, Delhi, Bukhara, and the Maldives had children by them or by concubines. Ibn Battuta insulted Greeks as \"enemies of Allah\", drunkards and \"swine eaters\", while at the same time in Ephesus he purchased and used a Greek girl who was one of his many slave girls in his \"harem\" through Byzantium, Khorasan, Africa, and Palestine. It was two decades before he again returned to find out what happened to one of his wives and child in Damascus.",
"title": "Works"
},
{
"paragraph_id": 74,
"text": "Ibn Battuta often experienced culture shock in regions he visited where the local customs of recently converted peoples did not fit in with his orthodox Muslim background. Among the Turks and Mongols, he was astonished at the freedom and respect enjoyed by women and remarked that on seeing a Turkish couple in a bazaar one might assume that the man was the woman's servant when he was in fact her husband. He also felt that dress customs in the Maldives, and some sub-Saharan regions in Africa were too revealing.",
"title": "Works"
},
{
"paragraph_id": 75,
"text": "Little is known about Ibn Battuta's life after completion of his Rihla in 1355. He was appointed a judge in Morocco and died in 1368 or 1369.",
"title": "Works"
},
{
"paragraph_id": 76,
"text": "Ibn Battuta's work was unknown outside the Muslim world until the beginning of the 19th century, when the German traveller-explorer Ulrich Jasper Seetzen (1767–1811) acquired a collection of manuscripts in the Middle East, among which was a 94-page volume containing an abridged version of Ibn Juzayy's text. Three extracts were published in 1818 by the German orientalist Johann Kosegarten. A fourth extract was published the following year. French scholars were alerted to the initial publication by a lengthy review published in the Journal de Savants by the orientalist Silvestre de Sacy.",
"title": "Works"
},
{
"paragraph_id": 77,
"text": "Three copies of another abridged manuscript were acquired by the Swiss traveller Johann Burckhardt and bequeathed to the University of Cambridge. He gave a brief overview of their content in a book published posthumously in 1819. The Arabic text was translated into English by the orientalist Samuel Lee and published in London in 1829.",
"title": "Works"
},
{
"paragraph_id": 78,
"text": "In the 1830s, during the French occupation of Algeria, the Bibliothèque Nationale (BNF) in Paris acquired five manuscripts of Ibn Battuta's travels, in which two were complete. One manuscript containing just the second part of the work is dated 1356 and is believed to be Ibn Juzayy's autograph. The BNF manuscripts were used in 1843 by the Irish-French orientalist Baron de Slane to produce a translation into French of Ibn Battuta's visit to the Sudan. They were also studied by the French scholars Charles Defrémery and Beniamino Sanguinetti. Beginning in 1853 they published a series of four volumes containing a critical edition of the Arabic text together with a translation into French. In their introduction Defrémery and Sanguinetti praised Lee's annotations but were critical of his translation which they claimed lacked precision, even in straightforward passages.",
"title": "Works"
},
{
"paragraph_id": 79,
"text": "In 1929, exactly a century after the publication of Lee's translation, the historian and orientalist Hamilton Gibb published an English translation of selected portions of Defrémery and Sanguinetti's Arabic text. Gibb had proposed to the Hakluyt Society in 1922 that he should prepare an annotated translation of the entire Rihla into English. His intention was to divide the translated text into four volumes, each volume corresponding to one of the volumes published by Defrémery and Sanguinetti. The first volume was not published until 1958. Gibb died in 1971, having completed the first three volumes. The fourth volume was prepared by Charles Beckingham and published in 1994. Defrémery and Sanguinetti's printed text has now been translated into number of other languages.",
"title": "Works"
},
{
"paragraph_id": 80,
"text": "German Islamic studies scholar Ralph Elger views Battuta's travel account as an important literary work but doubts the historicity of much of its content, which he suspects to be a work of fiction compiled and inspired from other contemporary travel reports. Various other scholars have raised similar doubts.",
"title": "Historicity"
},
{
"paragraph_id": 81,
"text": "In 1987, Ross E. Dunn similarly expressed doubts that any evidence would be found to support the narrative of the Rihla, but in 2010, Tim Mackintosh-Smith completed a multi-volume field study in dozens of the locales mentioned in the Rihla, in which he reports on previously unknown manuscripts of Islamic law kept in the archives of Al-Azhar University in Cairo that were copied by Ibn Battuta in Damascus in 1326, corroborating the date in the Rihla of his sojourn in Syria.",
"title": "Historicity"
},
{
"paragraph_id": 82,
"text": "The largest themed mall in Dubai, UAE, the Ibn Battuta Mall is named for him and features both areas designed to recreate the exotic lands he visited on his travels and statuary tableaus depicting scenes from his life history.",
"title": "Present-day cultural references"
},
{
"paragraph_id": 83,
"text": "A giant semblance of Battuta, alongside two others from the history of Arab exploration, the geographer and historian Al Bakri and the navigator and cartographer Ibn Majid, is displayed at the Mobility pavilion at Expo 2020 in Dubai in a section of the exhibition designed by Weta Workshop.",
"title": "Present-day cultural references"
},
{
"paragraph_id": 84,
"text": "Tangier Ibn Battouta Airport is an international airport located in his hometown of Tangier, Morocco.",
"title": "Present-day cultural references"
}
]
| Abu Abdullah Muhammad ibn Battutah, commonly known as Ibn Battuta, was a Maghrebi traveller, explorer and scholar. Over a period of thirty years from 1325 to 1354, Ibn Battuta visited most of North Africa, the Middle East, East Africa, Central Asia, South Asia, Southeast Asia, China, the Iberian Peninsula, and West Africa. Near the end of his life, he dictated an account of his journeys, titled A Gift to Those Who Contemplate the Wonders of Cities and the Marvels of Travelling, but commonly known as The Rihla. Ibn Battuta travelled more than any other explorer in pre-modern history, totalling around 117,000 km (73,000 mi), surpassing Zheng He with about 50,000 km (31,000 mi) and Marco Polo with 24,000 km (15,000 mi). There have been doubts over the historicity of some of Ibn Battuta's travels, particularly as they reach farther East. | 2001-11-09T16:06:34Z | 2023-12-26T00:20:10Z | [
"Template:See also",
"Template:Lang",
"Template:Dead link",
"Template:Short description",
"Template:Pp-semi",
"Template:Use dmy dates",
"Template:Cvt",
"Template:Verification needed",
"Template:Refbegin",
"Template:ISBN",
"Template:Refend",
"Template:Wikiquote",
"Template:Librivox author",
"Template:Refn",
"Template:Convert",
"Template:Better source needed",
"Template:Harvnb",
"Template:Portalbar",
"Template:Other uses",
"Template:Snd",
"Template:Cite news",
"Template:Authority control",
"Template:Citation",
"Template:Webarchive",
"Template:Commons category",
"Template:Sfn",
"Template:Citation needed",
"Template:Further",
"Template:Nbs",
"Template:Rp",
"Template:Location map many",
"Template:Reflist",
"Template:Cite EB1911",
"Template:Islamic geography",
"Template:IPAc-en",
"Template:Cite book",
"Template:Cite magazine",
"Template:Cite journal",
"Template:Notable foreigners who visited China",
"Template:Use British English",
"Template:Infobox person",
"Template:Anchor",
"Template:Fact",
"Template:Notelist",
"Template:Efn",
"Template:Blockquote",
"Template:Zh",
"Template:Cite web"
]
| https://en.wikipedia.org/wiki/Ibn_Battuta |
15,231 | Integrated Services Digital Network | Integrated Services Digital Network (ISDN) is a set of communication standards for simultaneous digital transmission of voice, video, data, and other network services over the digitalised circuits of the public switched telephone network. Work on the standard began in 1980 at Bell Labs and was formally standardized in 1988 in the CCITT "Red Book". By the time the standard was released, newer networking systems with much greater speeds were available, and ISDN saw relatively little uptake in the wider market. One estimate suggests ISDN use peaked at a worldwide total of 25 million subscribers at a time when 1.3 billion analog lines were in use. ISDN has largely been replaced with digital subscriber line (DSL) systems of much higher performance.
Prior to ISDN, the telephone system consisted of digital links like T1/E1 on the long-distance lines between telephone company offices and analog signals on copper telephone wires to the customers, the "last mile". At the time, the network was viewed as a way to transport voice, with some special services available for data using additional equipment like modems or by providing a T1 on the customer's location. What became ISDN started as an effort to digitize the last mile, originally under the name "Public Switched Digital Capacity" (PSDC). This would allow call routing to be completed in an all-digital system, while also offering a separate data line. The Basic Rate Interface, or BRI, is the standard last-mile connection in the ISDN system, offering two 64 kbit/s "bearer" lines and a single 16 kbit/s "delta" channel for commands and data.
Although ISDN found a number of niche roles and some wider uptake in specific locales, the system was largely ignored and garnered the industry nickname "innovation subscribers didn't need." It found a use for a time for small-office digital connection, using the voice lines for data at 64 kbit/s, sometimes "bonded" to 128 kbit/s, but the introduction of 56 kbit/s modems undercut its value in many roles. It also found use in videoconference systems, where the direct end-to-end connection was desirable. The H.320 standard was designed around its 64 kbit/s data rate. The underlying ISDN concepts found wider use as a replacement for the T1/E1 lines it was originally intended to extend, roughly doubling the performance of those lines.
Since its introduction in 1881, the twisted pair copper line has been installed for telephone use worldwide, with well over a billion individual connections installed by the year 2000. Over the first half of the 20th century, the connection of these lines to form calls was increasingly automated, culminating in the crossbar switches that had largely replaced earlier concepts by the 1950s.
As telephone use surged in the post-WWII era, the problem of connecting the massive number of lines became an area of significant study. Bell Labs' seminal work on digital encoding of voice led to the use of 64 kbit/s as a standard for voice lines (or 56 kbit/s in some systems). In 1962, Robert Aaron of Bell introduced the T1 system, which carried 1.544 Mbit/s of data on a pair of twisted pair lines over a distance of about one mile. This was used in the Bell network to carry traffic between local switch offices, with 24 voice lines at 64 kbit/s and a separate 8 kbit/s line for signaling commands like connecting or hanging up a call. This could be extended over long distances using repeaters in the lines. T1 used a very simple encoding scheme, alternate mark inversion (AMI), which reached only a few percent of the theoretical capacity of the line but was appropriate for 1960s electronics.
By the late 1970s, T1 lines and their faster counterparts, along with all-digital switching systems, had replaced the earlier analog systems for most of the western world, leaving only the customer's equipment and their local end office using analog systems. Digitizing this "last mile" was increasingly seen as the next problem that needed to be solved. However, these connections now represented over 99% of the total telephony network, as the upstream links had increasingly been aggregated into a smaller number of much higher performance systems, especially after the introduction of fiber optic lines. If the system was to become all-digital, a new standard would be needed that was appropriate for the existing customer lines, which might be miles long and of widely varying quality.
Around 1978, Ralph Wyndrum, Barry Bossick and Joe Lechleider of Bell Labs began one such effort to develop a last-mile solution. They studied a number of derivatives of the T1's AMI concept and concluded that a customer-side line could reliably carry about 160 kbit/s of data over a distance of 4 to 5 miles (6.4 to 8.0 km). That would be enough to carry two voice-quality lines at 64 kbit/s as well as a separate 16 kbit/s line for data. At the time, modems normally ran at 300 bit/s; 1200 bit/s would not become common until the early 1980s, and the 2400 bit/s standard would not be completed until 1984. In this market, 16 kbit/s represented a significant advance in performance, in addition to being a separate channel that coexisted with the voice channels.
A key problem was that the customer might only have a single twisted pair line to the location of the handset, so the solution used in T1 with separate upstream and downstream connections was not universally available. With analog connections, the solution was to use echo cancellation, but at the much higher bandwidth of the new concept, this would not be so simple. A debate broke out between teams worldwide about the best solution to this problem; some promoted newer versions of echo cancellation, while others preferred the "ping pong" concept where the direction of data would rapidly switch the line from send to receive at such a high rate it would not be noticeable to the user. John Cioffi had recently demonstrated that echo cancellation would work at these speeds, and further suggested that they should consider moving directly to 1.5 Mbit/s performance using this concept. The suggestion was laughed off the table (his boss told him to "sit down and shut up"), but the echo cancellation concept taken up by Joe Lechleider eventually won the debate.
Meanwhile, the debate over the encoding scheme itself was also ongoing. As the new standard was to be international, this was even more contentious as several regional digital standards had emerged in the 1960s and 70s and merging them was not going to be easy. To further confuse issues, in 1984 the Bell System was broken up and the US center for development moved to the American National Standards Institute (ANSI) T1D1.3 committee. Thomas Starr of the newly formed Ameritech led this effort and eventually convinced the ANSI group to select the 2B1Q standard proposed by Peter Adams of British Telecom. This standard used an 80 kHz base frequency and encoded two bits per baud to produce the 160 kbit/s base rate. Ultimately Japan selected a different standard, and Germany selected one with three levels instead of four, but all of these could interchange with the ANSI standard.
From an economic perspective, the European Commission sought to liberalize and regulate ISDN across the European Economic Community. The Council of the European Communities adopted Council Recommendation 86/659/EEC in December 1986 for its coordinated introduction within the framework of CEPT. ETSI (the European Telecommunications Standards Institute) was created by CEPT in 1988 and would develop the framework.
With digital-quality voice made possible by ISDN, two separate lines and all-the-time data, the telephony world was convinced there would be high customer demand for such systems in both the home and office. This proved not to be the case. During the lengthy standardization process, new concepts rendered the system largely superfluous. In the office, multi-line digital switches like the Meridian Norstar took over telephone lines while local area networks like Ethernet provided performance around 10 Mbit/s which had become the baseline for inter-computer connections in offices. ISDN offered no real advantages in the voice role and was far from competitive in data. Additionally, modems had continued improving, introducing 9600 bit/s systems in the late 1980s and 14.4 kbit/s in 1991, which significantly eroded ISDN's value proposition for the home customer.
Meanwhile, Lechleider had proposed using ISDN's echo cancellation and 2B1Q encoding on existing T1 connections so that the distance between repeaters could be doubled to about 2 miles (3.2 km). Another standards war broke out, but in 1991 Lechleider's 1.6 Mbit/s "High-Speed Digital Subscriber Line" eventually won this process as well, after Starr drove it through the ANSI T1E1.4 group. A similar standard emerged in Europe to replace their E1 lines, increasing the sampling range from 80 to 100 kHz to achieve 2.048 Mbit/s. By the mid-1990s, these Primary Rate Interface (PRI) lines had largely replaced T1 and E1 between telephone company offices.
Lechleider also believed this higher-speed standard would be much more attractive to customers than ISDN had proven. Unfortunately, at these speeds, the systems suffered from a type of crosstalk known as "NEXT", for "near-end crosstalk". This made longer connections on customer lines difficult. Lechleider noted that NEXT only occurred when similar frequencies were being used, and could be diminished if one of the directions used a different carrier rate, but doing so would reduce the potential bandwidth of that channel. Lechleider suggested that most consumer use would be asymmetric anyway, and that providing a high-speed channel towards the user and a lower speed return would be suitable for many uses.
This work in the early 1990s eventually led to the ADSL concept, which emerged in 1995. An early supporter of the concept was Alcatel, who jumped on ADSL while many other companies were still devoted to ISDN. Krish Prabu stated that "Alcatel will have to invest one billion dollars in ADSL before it makes a profit, but it is worth it." They introduced the first DSL Access Multiplexers (DSLAM), the large multi-modem systems used at the telephony offices, and later introduced customer ADSL modems under the Thomson brand. Alcatel remained the primary vendor of ADSL systems for well over a decade.
ADSL quickly replaced ISDN as the customer-facing solution for last-mile connectivity. ISDN has largely disappeared on the customer side, remaining in use only in niche roles like dedicated teleconferencing systems and similar legacy systems.
Integrated services refers to ISDN's ability to deliver at minimum two simultaneous connections, in any combination of data, voice, video, and fax, over a single line. Multiple devices can be attached to the line, and used as needed. That means an ISDN line can take care of what were expected to be most people's complete communications needs (apart from broadband Internet access and entertainment television) at a much higher transmission rate, without forcing the purchase of multiple analog phone lines. It also refers to integrated switching and transmission in that telephone switching and carrier wave transmission are integrated rather than separate as in earlier technology.
In ISDN, there are two types of channels, B (for "bearer") and D (for "data"). B channels are used for data (which may include voice), and D channels are intended for signaling and control (but can also be used for data).
There are two main ISDN implementations. Basic Rate Interface (BRI), also called basic rate access (BRA), consists of two B channels, each with a bandwidth of 64 kbit/s, and one D channel with a bandwidth of 16 kbit/s; together these three channels can be designated as 2B+D. Primary Rate Interface (PRI), also called primary rate access (PRA) in Europe, contains a greater number of B channels and a D channel with a bandwidth of 64 kbit/s. The number of B channels for PRI varies according to the nation: in North America and Japan it is 23B+1D, with an aggregate bit rate of 1.544 Mbit/s (T1); in Europe, India and Australia it is 30B+2D, with an aggregate bit rate of 2.048 Mbit/s (E1). Broadband Integrated Services Digital Network (BISDN) is a further ISDN implementation able to manage different types of services at the same time; it is primarily used within network backbones and employs ATM.
Another alternative ISDN configuration bonds the B channels of an ISDN BRI line to provide a total duplex bandwidth of 128 kbit/s; this precludes use of the line for voice calls while the internet connection is in use. The B channels of several BRIs can also be bonded; a typical use is a 384 kbit/s videoconferencing channel.
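The channel arithmetic in the last two paragraphs is simple enough to spell out. The sketch below (in Python; the constant and helper names are invented for illustration and are not part of any ISDN standard or API) reproduces the 2B+D, 23B+D and 30B+2D payload rates and the bonded rates mentioned above.

```python
# Illustrative ISDN channel arithmetic; all figures are in kbit/s.
B_CHANNEL = 64  # each bearer (B) channel

def payload(b_channels: int, d_channel: int, extra: int = 0) -> int:
    """Sum of bearer, signalling and any extra (framing/timing) capacity."""
    return b_channels * B_CHANNEL + d_channel + extra

print(payload(2, 16))       # BRI, 2B+D   -> 144
print(payload(23, 64))      # PRI, 23B+D  -> 1536 (plus 8 kbit/s framing = 1544, T1)
print(payload(30, 64, 64))  # PRI, 30B+2D -> 2048 (E1)

def bonded_rate(bri_lines: int) -> int:
    """Bonding aggregates the two B channels of each BRI line."""
    return bri_lines * 2 * B_CHANNEL

print(bonded_rate(1))       # 128: both B channels of one BRI bonded
print(bonded_rate(3))       # 384: a typical bonded videoconferencing channel
```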
Using the bipolar with eight-zero substitution encoding technique, call data is transmitted over the data (B) channels, with the signaling (D) channels used for call setup and management. Once a call is set up, there is a simple 64 kbit/s synchronous bidirectional data channel (actually implemented as two simplex channels, one in each direction) between the end parties, lasting until the call is terminated. There can be as many calls as there are bearer channels, to the same or different end-points. Bearer channels may also be multiplexed into what may be considered single, higher-bandwidth channels via a process called B channel BONDING, or via use of Multi-Link PPP "bundling" or by using an H0, H11, or H12 channel on a PRI.
The D channel can also be used for sending and receiving X.25 data packets and for connection to an X.25 packet network; this is specified in X.31. In practice, X.31 was only commercially implemented in the UK, France, Japan and Germany.
A set of reference points are defined in the ISDN standard to refer to certain points between the telco and the end user ISDN equipment.
Most NT-1 devices can perform the functions of the NT2 as well, and so the S and T reference points are generally collapsed into the S/T reference point.
In North America, the NT1 device is considered customer premises equipment (CPE) and must be maintained by the customer; thus, the U interface is provided to the customer. In other locations, the NT1 device is maintained by the telco, and the S/T interface is provided to the customer. In India, service providers provide the U interface, and an NT1 may be supplied by the service provider as part of the service offering.
The entry level interface to ISDN is the Basic Rate Interface (BRI), a 128 kbit/s service delivered over a pair of standard telephone copper wires. The 144 kbit/s overall payload rate is divided into two 64 kbit/s bearer channels ('B' channels) and one 16 kbit/s signaling channel ('D' channel or data channel). This is sometimes referred to as 2B+D.
The interface specifies the following network interfaces:
BRI-ISDN is very popular in Europe but is much less common in North America. It is also common in Japan, where it is known as INS64.
The other ISDN access available is the Primary Rate Interface (PRI), which is carried over T-carrier (T1) with 24 time slots (channels) in North America, and over E-carrier (E1) with 32 channels in most other countries. Each channel provides transmission at a 64 kbit/s data rate.
With the E1 carrier, the available channels are divided into 30 bearer (B) channels, one data (D) channel, and one timing and alarm channel. This scheme is often referred to as 30B+2D.
In North America, PRI service is delivered via T1 carriers with only one data channel, often referred to as 23B+D, and a total data rate of 1544 kbit/s. Non-Facility Associated Signalling (NFAS) allows two or more PRI circuits to be controlled by a single D channel, which is sometimes called 23B+D + n*24B. D-channel backup allows for a second D channel in case the primary fails. NFAS is commonly used on a Digital Signal 3 (DS3/T3).
PRI-ISDN is popular throughout the world, especially for connecting private branch exchanges to the public switched telephone network (PSTN).
Even though many network professionals use the term ISDN to refer to the lower-bandwidth BRI circuit, in North America BRI is relatively uncommon whilst PRI circuits serving PBXs are commonplace.
The bearer channel (B) is a standard 64 kbit/s voice channel of 8 bits sampled at 8 kHz with G.711 encoding. B-channels can also be used to carry data, since they are nothing more than digital channels.
Each one of these channels is known as a DS0.
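The 64 kbit/s DS0 rate follows directly from the sampling parameters above; a short check in Python (illustrative only, variable names ours):

    # A DS0 carries G.711 voice: 8-bit samples taken 8,000 times per second.
    bits_per_sample = 8
    samples_per_second = 8000
    ds0_bit_rate = bits_per_sample * samples_per_second  # 64,000 bit/s = 64 kbit/s
    print(ds0_bit_rate)  # prints: 64000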
Most B channels can carry a 64 kbit/s signal, but some were limited to 56 kbit/s because they traveled over robbed-bit signaling (RBS) lines. This was commonplace in the 20th century, but has since become less so.
X.25 can be carried over the B or D channels of a BRI line, and over the B channels of a PRI line. X.25 over the D channel is used at many point-of-sale (credit card) terminals because it eliminates the modem setup and because it connects to the central system over a D channel, thereby eliminating the need for modems and making much better use of the central system's telephone lines.
X.25 was also part of an ISDN protocol called "Always On/Dynamic ISDN", or AO/DI. This allowed a user to have a constant multi-link PPP connection to the internet over X.25 on the D channel, and brought up one or two B channels as needed.
In theory, Frame Relay can operate over the D channel of BRIs and PRIs, but it is seldom, if ever, used.
ISDN is a core technology in the telephone industry. A telephone network can be thought of as a collection of wires strung between switching systems. The common electrical specification for the signals on these wires is T1 or E1. Between telephone company switches, the signaling is performed via SS7. Normally, a PBX is connected via a T1 with robbed bit signaling to indicate on-hook or off-hook conditions, with MF and DTMF tones used to encode the destination number. ISDN signaling is much better because messages can be sent much more quickly than numbers can be encoded as long (100 ms per digit) tone sequences, which results in faster call setup times. Also, a greater number of features are available and fraud is reduced.
In common use, the term ISDN is often limited to Q.931 and related protocols, a set of signaling protocols for establishing and breaking circuit-switched connections and for providing advanced calling features to the user. Another usage was the deployment of videoconference systems, where a direct end-to-end connection is desirable. ISDN uses the H.320 standard for audio coding and video coding.
ISDN is also used as a smart-network technology intended to add new services to the public switched telephone network (PSTN) by giving users direct access to end-to-end circuit-switched digital services and as a backup or failsafe circuit solution for critical use data circuits.
One of ISDN's successful use cases was in the videoconference field, where even small improvements in data rates are useful, but more importantly, its direct end-to-end connection offers lower latency and better reliability than the packet-switched networks of the 1990s. The H.320 standard for audio coding and video coding was designed with ISDN in mind, and more specifically its 64 kbit/s basic data rate, including audio codecs such as G.711 (PCM) and G.728 (CELP), and discrete cosine transform (DCT) video codecs such as H.261 and H.263.
ISDN is used heavily by the broadcast industry as a reliable way of switching low-latency, high-quality, long-distance audio circuits. In conjunction with an appropriate codec using MPEG or various manufacturers' proprietary algorithms, an ISDN BRI can be used to send stereo bi-directional audio coded at 128 kbit/s with 20 Hz – 20 kHz audio bandwidth, although commonly the G.722 algorithm is used with a single 64 kbit/s B channel to send much lower latency mono audio at the expense of audio quality. Where very high quality audio is required, multiple ISDN BRIs can be used in parallel to provide a higher-bandwidth circuit-switched connection. BBC Radio 3 commonly makes use of three ISDN BRIs to carry a 320 kbit/s audio stream for live outside broadcasts. ISDN BRI services are used to link remote studios, sports grounds and outside broadcasts into the main broadcast studio. ISDN via satellite is used by field reporters around the world. It is also common to use ISDN for the return audio links to remote satellite broadcast vehicles.
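As a rough check of the broadcast figures above (illustrative only; variable names are ours): one BRI offers two 64 kbit/s B channels, so 128 kbit/s stereo fits on a single BRI, and three BRIs provide enough bearer capacity for a 320 kbit/s stream.

    # Illustrative check of the broadcast-audio channel arithmetic above.
    B_KBPS = 64
    stereo_rate = 2 * B_KBPS              # 128 kbit/s stereo over one BRI
    three_bri_capacity = 3 * 2 * B_KBPS   # six B channels -> 384 kbit/s
    print(stereo_rate, three_bri_capacity >= 320)  # prints: 128 True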
In many countries, such as the UK and Australia, ISDN has displaced the older technology of equalised analogue landlines, with these circuits being phased out by telecommunications providers. Use of IP-based streaming codecs such as Comrex ACCESS and ipDTL is becoming more widespread in the broadcast sector, using broadband internet to connect remote studios.
Providing a backup line for a business's inter-office and internet connectivity was a popular use of the technology.
A study by the German Department of Science surveyed the spread of ISDN channels per 1,000 inhabitants in 2005.
Telstra provides ISDN services to business customers. There are five types of ISDN service: ISDN2, ISDN2 Enhanced, ISDN10, ISDN20 and ISDN30. These fall into two groups: the Basic Rate services (ISDN2 and ISDN2 Enhanced) and the Primary Rate services (ISDN10/20/30). Telstra changed the minimum monthly charge for voice and data calls. Telstra announced that new sales of ISDN products would cease as of 31 January 2018, with the final exit date of the ISDN service and the migration to the new service to be confirmed by 2022.
Orange offers ISDN services under the product name Numeris (2B+D), of which a professional (Duo) and a home (Itoo) version are available. ISDN is generally known as RNIS in France and has widespread availability. The introduction of ADSL is reducing ISDN use for data transfer and Internet access, although it is still common in more rural and outlying areas, and for applications such as business voice and point-of-sale terminals. In 2023, Numeris services will enter a phase-out process and will be replaced by VoIP services.
In Germany, ISDN was very popular with an installed base of 25 million channels (29% of all subscriber lines in Germany as of 2003 and 20% of all ISDN channels worldwide). Due to the success of ISDN, the number of installed analog lines was decreasing. Deutsche Telekom (DTAG) offered both BRI and PRI. Competing phone companies often offered ISDN only and no analog lines. However, these operators generally offered free hardware that also allowed the use of POTS equipment, such as NTBAs ("Network Termination for ISDN Basic rate Access": small devices that bridge the two-wire UK0 line to the four-wire S0 bus) with integrated terminal adapters. Because of the widespread availability of ADSL services, ISDN was primarily used for voice and fax traffic.
Until 2007, ISDN (BRI) and ADSL/VDSL were often bundled on the same line, mainly because the combination of DSL with an analog line had no cost advantage over a combined ISDN-DSL line. This practice turned into an issue for the operators when vendors of ISDN technology stopped manufacturing it and spare parts became hard to come by. Phone companies have since introduced cheaper xDSL-only products using VoIP for telephony, also in an effort to reduce the cost of operating separate data and voice networks.
Since approximately 2010, most German operators have increasingly offered VoIP on top of DSL lines and have ceased offering ISDN lines. New ISDN lines have not been available in Germany since 2018; existing ISDN lines were phased out from 2016 onwards, and existing customers were encouraged to move to DSL-based VoIP products. Deutsche Telekom intended to complete the phase-out by 2018 but postponed the date to 2020; other providers such as Vodafone estimated that their phase-out would be completed by 2022.
OTE, the incumbent telecommunications operator, offers ISDN BRI (BRA) services in Greece. Following the launch of ADSL in 2003, the importance of ISDN for data transfer began to decrease and is today limited to niche business applications with point-to-point requirements.
Bharat Sanchar Nigam Limited, Reliance Communications and Bharti Airtel are the largest communication service providers, and offer both ISDN BRI and PRI services across the country. Reliance Communications and Bharti Airtel use DLC technology to provide these services. With the introduction of broadband technology, the load on bandwidth is increasingly being absorbed by ADSL. ISDN continues to be an important backup network for point-to-point leased line customers such as banks, Eseva Centers, Life Insurance Corporation of India, and SBI ATMs.
On April 19, 1988, Japanese telecommunications company NTT began offering nationwide ISDN services trademarked INS Net 64 and INS Net 1500, the fruition of NTT's independent research and trials from the 1970s of what it referred to as the INS (Information Network System).
Previously, in April 1985, Japanese digital telephone exchange hardware made by Fujitsu was used to experimentally deploy the world's first I interface ISDN. The I interface, unlike the older and incompatible Y interface, is what modern ISDN services use today.
Since 2000, NTT's ISDN offering has been known as FLET's ISDN, incorporating the "FLET's" brand that NTT uses for all of its ISP offerings.
In Japan, the number of ISDN subscribers dwindled as alternative technologies such as ADSL, cable Internet access, and fiber to the home gained greater popularity. On November 2, 2010, NTT announced plans to migrate their backend from PSTN to the IP network from around 2020 to around 2025. For this migration, ISDN services will be retired, and fiber optic services are recommended as an alternative.
On April 19, 1988, Norwegian telecommunications company Telenor began offering nationwide ISDN services trademarked INS Net 64 and INS Net 1500, a fruition of NTT's independent research and trial from the 1970s of what it referred to as the INS (Information Network System).
In the United Kingdom, British Telecom (BT) provides ISDN2e (BRI) as well as ISDN30 (PRI). Until April 2006, they also offered services named Home Highway and Business Highway, which were BRI ISDN-based services that offered integrated analogue connectivity as well as ISDN. Later versions of the Highway products also included built-in USB sockets for direct computer access. Home Highway was bought by many home users, usually for Internet connections; although not as fast as ADSL, it was available before ADSL and in places where ADSL did not reach.
In early 2015, BT announced their intention to retire the UK's ISDN infrastructure by 2025.
ISDN-BRI never gained popularity as a general use telephone access technology in Canada and the US, and remains a niche product. The service was seen as "a solution in search of a problem", and the extensive array of options and features were difficult for customers to understand and use. ISDN has long been known by derogatory backronyms highlighting these issues, such as It Still Does Nothing, Innovations Subscribers Don't Need, and I Still Don't kNow, or, from the supposed standpoint of telephone companies, I Smell Dollars Now.
Although various minimum bandwidths have been used in definitions of broadband Internet access, ranging from 64 kbit/s up to 1.0 Mbit/s, the 2006 OECD report is typical in defining broadband as having download data transfer rates equal to or faster than 256 kbit/s, while the United States FCC, as of 2008, defines broadband as anything above 768 kbit/s. Once the term "broadband" came to be associated with data rates incoming to the customer at 256 kbit/s or more, and alternatives like ADSL grew in popularity, the consumer market for BRI did not develop. Its only remaining advantage is that, while ADSL has a functional distance limitation and can use ADSL loop extenders, BRI has a greater distance limit and can use repeaters. As such, BRI may be acceptable for customers who are too remote for ADSL. Widespread use of BRI is further stymied by some small North American CLECs such as CenturyTel having given up on it and not providing Internet access using it. However, AT&T in most states (especially the former SBC/SWB territory) will still install an ISDN BRI line anywhere a normal analog line can be placed, and the monthly charge is roughly $55.
ISDN-BRI is currently primarily used in industries with specialized and very specific needs. High-end videoconferencing hardware can bond up to 8 B-channels together (using a BRI circuit for every 2 channels) to provide digital, circuit-switched video connections to almost anywhere in the world. This is very expensive and is being replaced by IP-based conferencing, but where cost is less of a concern than predictable quality, and where a QoS-enabled IP network does not exist, BRI is the preferred choice.
Most modern non-VoIP PBXs use ISDN-PRI circuits. These are connected via T1 lines with the central office switch, replacing older analog two-way and direct inward dialing (DID) trunks. PRI is capable of delivering Calling Line Identification (CLID) in both directions so that the telephone number of an extension, rather than a company's main number, can be sent. It is still commonly used in recording studios and some radio programs, when a voice-over actor or host is in one studio conducting remote work, but the director and producer are in a studio at another location. The ISDN protocol delivers channelized, not-over-the-Internet service, powerful call setup and routing features, faster setup and tear down, superior audio fidelity as compared to plain old telephone service (POTS), lower delay and, at higher densities, lower cost.
In 2013, Verizon announced it would no longer take orders for ISDN service in the Northeastern United States. | [
{
"paragraph_id": 0,
"text": "Integrated Services Digital Network (ISDN) is a set of communication standards for simultaneous digital transmission of voice, video, data, and other network services over the digitalised circuits of the public switched telephone network. Work on the standard began in 1980 at Bell Labs and was formally standardized in 1988 in the CCITT \"Red Book\". By the time the standard was released, newer networking systems with much greater speeds were available, and ISDN saw relatively little uptake in the wider market. One estimate suggests ISDN use peaked at a worldwide total of 25 million subscribers at a time when 1.3 billion analog lines were in use. ISDN has largely been replaced with digital subscriber line (DSL) systems of much higher performance.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Prior to ISDN, the telephone system consisted of digital links like T1/E1 on the long-distance lines between telephone company offices and analog signals on copper telephone wires to the customers, the \"last mile\". At the time, the network was viewed as a way to transport voice, with some special services available for data using additional equipment like modems or by providing a T1 on the customer's location. What became ISDN started as an effort to digitize the last mile, originally under the name \"Public Switched Digital Capacity\" (PSDC). This would allow call routing to be completed in an all-digital system, while also offering a separate data line. The Basic Rate Interface, or BRI, is the standard last-mile connection in the ISDN system, offering two 64 kbit/s \"bearer\" lines and a single 16 kbit/s \"delta\" channel for commands and data.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Although ISDN found a number of niche roles and some wider uptake in specific locales, the system was largely ignored and garnered the industry nickname \"innovation subscribers didn't need.\" It found a use for a time for small-office digital connection, using the voice lines for data at 64 kbit/s, sometimes \"bonded\" to 128 kbit/s, but the introduction of 56 kbit/s modems undercut its value in many roles. It also found use in videoconference systems, where the direct end-to-end connection was desirable. The H.320 standard was designed around its 64 kbit/s data rate. The underlying ISDN concepts found wider use as a replacement for the T1/E1 lines it was originally intended to extend, roughly doubling the performance of those lines.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Since its introduction in 1881, the twisted pair copper line has been installed for telephone use worldwide, with well over a billion individual connections installed by the year 2000. Over the first half of the 20th century, the connection of these lines to form calls was increasingly automated, culminating in the crossbar switches that had largely replaced earlier concepts by the 1950s.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "As telephone use surged in the post-WWII era, the problem of connecting the massive number of lines became an area of significant study. Bell Labs' seminal work on digital encoding of voice led to the use of 64 kbit/s as a standard for voice lines (or 56 kbit/s in some systems). In 1962, Robert Aaron of Bell introduced the T1 system, which carried 1.544 Mbit/s of data on a pair of twisted pair lines over a distance of about one mile. This was used in the Bell network to carry traffic between local switch offices, with 24 voice lines at 64 kbit/s and a separate 8 kbit/s line for signaling commands like connecting or hanging up a call. This could be extended over long distances using repeaters in the lines. T1 used a very simple encoding scheme, alternate mark inversion (AMI), which reached only a few percent of the theoretical capacity of the line but was appropriate for 1960s electronics.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "By the late 1970s, T1 lines and their faster counterparts, along with all-digital switching systems, had replaced the earlier analog systems for most of the western world, leaving only the customer's equipment and their local end office using analog systems. Digitizing this \"last mile\" was increasingly seen as the next problem that needed to be solved. However, these connections now represented over 99% of the total telephony network, as the upstream links had increasingly been aggregated into a smaller number of much higher performance systems, especially after the introduction of fiber optic lines. If the system was to become all-digital, a new standard would be needed that was appropriate for the existing customer lines, which might be miles long and of widely varying quality.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Around 1978, Ralph Wyndrum, Barry Bossick and Joe Lechleider of Bell Labs began one such effort to develop a last-mile solution. They studied a number of derivatives of the T1's AMI concept and concluded that a customer-side line could reliably carry about 160 kbit/s of data over a distance of 4 to 5 miles (6.4 to 8.0 km). That would be enough to carry two voice-quality lines at 64 kbit/s as well as a separate 16 kbit/s line for data. At the time, modems were normally 300 bit/s and 1200 bit/s would not become common until the early 1980s and the 2400 bit/s standard would not be completed until 1984. In this market, 16 kbit/s represented a significant advance in performance in addition to being a separate channel that coexists with voice channels.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "A key problem was that the customer might only have a single twisted pair line to the location of the handset, so the solution used in T1 with separate upstream and downstream connections was not universally available. With analog connections, the solution was to use echo cancellation, but at the much higher bandwidth of the new concept, this would not be so simple. A debate broke out between teams worldwide about the best solution to this problem; some promoted newer versions of echo cancellation, while others preferred the \"ping pong\" concept where the direction of data would rapidly switch the line from send to receive at such a high rate it would not be noticeable to the user. John Cioffi had recently demonstrated echo cancellation would work at these speeds, and further suggested that they should consider moving directly to 1.5 Mbit/s performance using this concept. The suggestion was literally laughed off the table (His boss told him to \"sit down and shut up\") but the echo cancellation concept that was taken up by Joe Lechleider eventually came to win the debate.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Meanwhile, the debate over the encoding scheme itself was also ongoing. As the new standard was to be international, this was even more contentious as several regional digital standards had emerged in the 1960s and 70s and merging them was not going to be easy. To further confuse issues, in 1984 the Bell System was broken up and the US center for development moved to the American National Standards Institute (ANSI) T1D1.3 committee. Thomas Starr of the newly formed Ameritech led this effort and eventually convinced the ANSI group to select the 2B1Q standard proposed by Peter Adams of British Telecom. This standard used an 80 kHz base frequency and encoded two bits per baud to produce the 160 kbit/s base rate. Ultimately Japan selected a different standard, and Germany selected one with three levels instead of four, but all of these could interchange with the ANSI standard.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "From an economic perspective, the European Commission sought to liberalize and regulate ISDN across the European Economic Community. The Council of the European Communities adopted Council Recommendation 86/659/EEC in December 1986 for its coordinated introduction within the framework of CEPT. ETSI (the European Telecommunications Standards Institute) was created by CEPT in 1988 and would develop the framework.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "With digital-quality voice made possible by ISDN, two separate lines and all-the-time data, the telephony world was convinced there would be high customer demand for such systems in both the home and office. This proved not to be the case. During the lengthy standardization process, new concepts rendered the system largely superfluous. In the office, multi-line digital switches like the Meridian Norstar took over telephone lines while local area networks like Ethernet provided performance around 10 Mbit/s which had become the baseline for inter-computer connections in offices. ISDN offered no real advantages in the voice role and was far from competitive in data. Additionally, modems had continued improving, introducing 9600 bit/s systems in the late 1980s and 14.4 kbit/s in 1991, which significantly eroded ISDN's value proposition for the home customer.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Meanwhile, Lechleider had proposed using ISDN's echo cancellation and 2B1Q encoding on existing T1 connections so that the distance between repeaters could be doubled to about 2 miles (3.2 km). Another standards war broke out, but in 1991 Lechleider's 1.6 Mbit/s \"High-Speed Digital Subscriber Line\" eventually won this process as well, after Starr drove it through the ANSI T1E1.4 group. A similar standard emerged in Europe to replace their E1 lines, increasing the sampling range from 80 to 100 kHz to achieve 2.048 Mbit/s. By the mid-1990s, these Primary Rate Interface (PRI) lines had largely replaced T1 and E1 between telephone company offices.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Lechleider also believed this higher-speed standard would be much more attractive to customers than ISDN had proven. Unfortunately, at these speeds, the systems suffered from a type of crosstalk known as \"NEXT\", for \"near-end crosstalk\". This made longer connections on customer lines difficult. Lechleider noted that NEXT only occurred when similar frequencies were being used, and could be diminished if one of the directions used a different carrier rate, but doing so would reduce the potential bandwidth of that channel. Lechleider suggested that most consumer use would be asymmetric anyway, and that providing a high-speed channel towards the user and a lower speed return would be suitable for many uses.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "This work in the early 1990s eventually led to the ADSL concept, which emerged in 1995. An early supporter of the concept was Alcatel, who jumped on ADSL while many other companies were still devoted to ISDN. Krish Prabu stated that \"Alcatel will have to invest one billion dollars in ADSL before it makes a profit, but it is worth it.\" They introduced the first DSL Access Multiplexers (DSLAM), the large multi-modem systems used at the telephony offices, and later introduced customer ADSL modems under the Thomson brand. Alcatel remained the primary vendor of ADSL systems for well over a decade.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "ADSL quickly replaced ISDN as the customer-facing solution for last-mile connectivity. ISDN has largely disappeared on the customer side, remaining in use only in niche roles like dedicated teleconferencing systems and similar legacy systems.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Integrated services refers to ISDN's ability to deliver at minimum two simultaneous connections, in any combination of data, voice, video, and fax, over a single line. Multiple devices can be attached to the line, and used as needed. That means an ISDN line can take care of what were expected to be most people's complete communications needs (apart from broadband Internet access and entertainment television) at a much higher transmission rate, without forcing the purchase of multiple analog phone lines. It also refers to integrated switching and transmission in that telephone switching and carrier wave transmission are integrated rather than separate as in earlier technology.",
"title": "Design"
},
{
"paragraph_id": 16,
"text": "In ISDN, there are two types of channels, B (for \"bearer\") and D (for \"data\"). B channels are used for data (which may include voice), and D channels are intended for signaling and control (but can also be used for data).",
"title": "Design"
},
{
"paragraph_id": 17,
"text": "There are two ISDN implementations. Basic Rate Interface (BRI), also called basic rate access (BRA) — consists of two B channels, each with bandwidth of 64 kbit/s, and one D channel with a bandwidth of 16 kbit/s. Together these three channels can be designated as 2B+D. Primary Rate Interface (PRI), also called primary rate access (PRA) in Europe — contains a greater number of B channels and a D channel with a bandwidth of 64 kbit/s. The number of B channels for PRI varies according to the nation: in North America and Japan it is 23B+1D, with an aggregate bit rate of 1.544 Mbit/s (T1); in Europe, India and Australia it is 30B+2D, with an aggregate bit rate of 2.048 Mbit/s (E1). Broadband Integrated Services Digital Network (BISDN) is another ISDN implementation and it is able to manage different types of services at the same time. It is primarily used within network backbones and employs ATM.",
"title": "Design"
},
{
"paragraph_id": 18,
"text": "Another alternative ISDN configuration can be used in which the B channels of an ISDN BRI line are bonded to provide a total duplex bandwidth of 128 kbit/s. This precludes use of the line for voice calls while the internet connection is in use. The B channels of several BRIs can be bonded, a typical use is a 384K videoconferencing channel.",
"title": "Design"
},
{
"paragraph_id": 19,
"text": "Using bipolar with eight-zero substitution encoding technique, call data is transmitted over the data (B) channels, with the signaling (D) channels used for call setup and management. Once a call is set up, there is a simple 64 kbit/s synchronous bidirectional data channel (actually implemented as two simplex channels, one in each direction) between the end parties, lasting until the call is terminated. There can be as many calls as there are bearer channels, to the same or different end-points. Bearer channels may also be multiplexed into what may be considered single, higher-bandwidth channels via a process called B channel BONDING, or via use of Multi-Link PPP \"bundling\" or by using an H0, H11, or H12 channel on a PRI.",
"title": "Design"
},
{
"paragraph_id": 20,
"text": "The D channel can also be used for sending and receiving X.25 data packets, and connection to X.25 packet network, this is specified in X.31. In practice, X.31 was only commercially implemented in the UK, France, Japan and Germany.",
"title": "Design"
},
{
"paragraph_id": 21,
"text": "A set of reference points are defined in the ISDN standard to refer to certain points between the telco and the end user ISDN equipment.",
"title": "Design"
},
{
"paragraph_id": 22,
"text": "Most NT-1 devices can perform the functions of the NT2 as well, and so the S and T reference points are generally collapsed into the S/T reference point.",
"title": "Design"
},
{
"paragraph_id": 23,
"text": "In North America, the NT1 device is considered customer premises equipment (CPE) and must be maintained by the customer, thus, the U interface is provided to the customer. In other locations, the NT1 device is maintained by the telco, and the S/T interface is provided to the customer. In India, service providers provide U interface and an NT1 may be supplied by Service provider as part of service offering.",
"title": "Design"
},
{
"paragraph_id": 24,
"text": "The entry level interface to ISDN is the Basic Rate Interface (BRI), a 128 kbit/s service delivered over a pair of standard telephone copper wires. The 144 kbit/s overall payload rate is divided into two 64 kbit/s bearer channels ('B' channels) and one 16 kbit/s signaling channel ('D' channel or data channel). This is sometimes referred to as 2B+D.",
"title": "Design"
},
{
"paragraph_id": 25,
"text": "The interface specifies the following network interfaces:",
"title": "Design"
},
{
"paragraph_id": 26,
"text": "BRI-ISDN is very popular in Europe but is much less common in North America. It is also common in Japan — where it is known as INS64.",
"title": "Design"
},
{
"paragraph_id": 27,
"text": "The other ISDN access available is the Primary Rate Interface (PRI), which is carried over T-carrier (T1) with 24 time slots (channels) in North America, and over E-carrier (E1) with 32 channels in most other countries. Each channel provides transmission at a 64 kbit/s data rate.",
"title": "Design"
},
{
"paragraph_id": 28,
"text": "With the E1 carrier, the available channels are divided into 30 bearer (B) channels, one data (D) channel, and one timing and alarm channel. This scheme is often referred to as 30B+2D.",
"title": "Design"
},
{
"paragraph_id": 29,
"text": "In North America, PRI service is delivered via T1 carriers with only one data channel, often referred to as 23B+D, and a total data rate of 1544 kbit/s. Non-Facility Associated Signalling (NFAS) allows two or more PRI circuits to be controlled by a single D channel, which is sometimes called 23B+D + n*24B. D-channel backup allows for a second D channel in case the primary fails. NFAS is commonly used on a Digital Signal 3 (DS3/T3).",
"title": "Design"
},
{
"paragraph_id": 30,
"text": "PRI-ISDN is popular throughout the world, especially for connecting private branch exchanges to the public switched telephone network (PSTN).",
"title": "Design"
},
{
"paragraph_id": 31,
"text": "Even though many network professionals use the term ISDN to refer to the lower-bandwidth BRI circuit, in North America BRI is relatively uncommon whilst PRI circuits serving PBXs are commonplace.",
"title": "Design"
},
{
"paragraph_id": 32,
"text": "The bearer channel (B) is a standard 64 kbit/s voice channel of 8 bits sampled at 8 kHz with G.711 encoding. B-channels can also be used to carry data, since they are nothing more than digital channels.",
"title": "Design"
},
{
"paragraph_id": 33,
"text": "Each one of these channels is known as a DS0.",
"title": "Design"
},
{
"paragraph_id": 34,
"text": "Most B channels can carry a 64 kbit/s signal, but some were limited to 56K because they traveled over RBS lines. This was commonplace in the 20th century, but has since become less so.",
"title": "Design"
},
{
"paragraph_id": 35,
"text": "X.25 can be carried over the B or D channels of a BRI line, and over the B channels of a PRI line. X.25 over the D channel is used at many point-of-sale (credit card) terminals because it eliminates the modem setup, and because it connects to the central system over a B channel, thereby eliminating the need for modems and making much better use of the central system's telephone lines.",
"title": "Design"
},
{
"paragraph_id": 36,
"text": "X.25 was also part of an ISDN protocol called \"Always On/Dynamic ISDN\", or AO/DI. This allowed a user to have a constant multi-link PPP connection to the internet over X.25 on the D channel, and brought up one or two B channels as needed.",
"title": "Design"
},
{
"paragraph_id": 37,
"text": "In theory, Frame Relay can operate over the D channel of BRIs and PRIs, but it is seldom, if ever, used.",
"title": "Design"
},
{
"paragraph_id": 38,
"text": "ISDN is a core technology in the telephone industry. A telephone network can be thought of as a collection of wires strung between switching systems. The common electrical specification for the signals on these wires is T1 or E1. Between telephone company switches, the signaling is performed via SS7. Normally, a PBX is connected via a T1 with robbed bit signaling to indicate on-hook or off-hook conditions and MF and DTMF tones to encode the destination number. ISDN is much better because messages can be sent much more quickly than by trying to encode numbers as long (100 ms per digit) tone sequences. This results in faster call setup times. Also, a greater number of features are available and fraud is reduced.",
"title": "Uses"
},
{
"paragraph_id": 39,
"text": "In common use, ISDN is often limited to usage to Q.931 and related protocols, which are a set of signaling protocols establishing and breaking circuit-switched connections, and for advanced calling features for the user. Another usage was the deployment of videoconference systems, where a direct end-to-end connection is desirable. ISDN uses the H.320 standard for audio coding and video coding.",
"title": "Uses"
},
{
"paragraph_id": 40,
"text": "ISDN is also used as a smart-network technology intended to add new services to the public switched telephone network (PSTN) by giving users direct access to end-to-end circuit-switched digital services and as a backup or failsafe circuit solution for critical use data circuits.",
"title": "Uses"
},
{
"paragraph_id": 41,
"text": "One of ISDNs successful use-cases was in the videoconference field, where even small improvements in data rates are useful, but more importantly, its direct end-to-end connection offers lower latency and better reliability than packet-switched networks of the 1990s. The H.320 standard for audio coding and video coding was designed with ISDN in mind, and more specifically its 64 kbit/s basic data rate. including audio codecs such as G.711 (PCM) and G.728 (CELP), and discrete cosine transform (DCT) video codecs such as H.261 and H.263.",
"title": "Uses"
},
{
"paragraph_id": 42,
"text": "ISDN is used heavily by the broadcast industry as a reliable way of switching low-latency, high-quality, long-distance audio circuits. In conjunction with an appropriate codec using MPEG or various manufacturers' proprietary algorithms, an ISDN BRI can be used to send stereo bi-directional audio coded at 128 kbit/s with 20 Hz – 20 kHz audio bandwidth, although commonly the G.722 algorithm is used with a single 64 kbit/s B channel to send much lower latency mono audio at the expense of audio quality. Where very high quality audio is required multiple ISDN BRIs can be used in parallel to provide a higher bandwidth circuit switched connection. BBC Radio 3 commonly makes use of three ISDN BRIs to carry 320 kbit/s audio stream for live outside broadcasts. ISDN BRI services are used to link remote studios, sports grounds and outside broadcasts into the main broadcast studio. ISDN via satellite is used by field reporters around the world. It is also common to use ISDN for the return audio links to remote satellite broadcast vehicles.",
"title": "Uses"
},
{
"paragraph_id": 43,
"text": "In many countries, such as the UK and Australia, ISDN has displaced the older technology of equalised analogue landlines, with these circuits being phased out by telecommunications providers. Use of IP-based streaming codecs such as Comrex ACCESS and ipDTL is becoming more widespread in the broadcast sector, using broadband internet to connect remote studios.",
"title": "Uses"
},
{
"paragraph_id": 44,
"text": "Providing a backup line for business's inter-office and internet connectivity was a popular use of the technology.",
"title": "Uses"
},
{
"paragraph_id": 45,
"text": "A study of the German Department of Science shows the following spread of ISDN-channels per 1,000 inhabitants in 2005:",
"title": "International deployment"
},
{
"paragraph_id": 46,
"text": "Telstra provides the business customer with the ISDN services. There are five types of ISDN services which are ISDN2, ISDN2 Enhanced, ISDN10, ISDN20 and ISDN30. Telstra changed the minimum monthly charge for voice and data calls. In general, there are two group of ISDN service types; The Basic Rate services – ISDN 2 or ISDN 2 Enhanced. Another group of types are the Primary Rate services, ISDN 10/20/30. Telstra announced that the new sales of ISDN product would be unavailable as of 31 January 2018. The final exit date of ISDN service and migration to the new service would be confirmed by 2022.",
"title": "International deployment"
},
{
"paragraph_id": 47,
"text": "[ORANGE ] offers ISDN services under their product name Numeris (2 B+D), of which a professional Duo and home Itoo version is available. ISDN is generally known as RNIS in France and has widespread availability. The introduction of ADSL is reducing ISDN use for data transfer and Internet access, although it is still common in more rural and outlying areas, and for applications such as business voice and point-of-sale terminals. In 2023, Numeris services will enter a phase-out process. They will be replaced by VoIP services.",
"title": "International deployment"
},
{
"paragraph_id": 48,
"text": "In Germany, ISDN was very popular with an installed base of 25 million channels (29% of all subscriber lines in Germany as of 2003 and 20% of all ISDN channels worldwide). Due to the success of ISDN, the number of installed analog lines was decreasing. Deutsche Telekom (DTAG) offered both BRI and PRI. Competing phone companies often offered ISDN only and no analog lines. However, these operators generally offered free hardware that also allows the use of POTS equipment, such as NTBAs (\"Network Termination for ISDN Basic rate Access\": small devices that bridge the two-wire UK0 line to the four-wire S0 bus) with integrated terminal adapters. Because of the widespread availability of ADSL services, ISDN was primarily used for voice and fax traffic.",
"title": "International deployment"
},
{
"paragraph_id": 49,
"text": "Until 2007 ISDN (BRI) and ADSL/VDSL were often bundled on the same line, mainly because the combination of DSL with an analog line had no cost advantage over a combined ISDN-DSL line. This practice turned into an issue for the operators when vendors of ISDN technology stopped manufacturing it and spare parts became hard to come by. Since then phone companies started introducing cheaper xDSL-only products using VoIP for telephony, also in an effort to reduce their costs by operating separate data & voice networks.",
"title": "International deployment"
},
{
"paragraph_id": 50,
"text": "Since approximately 2010, most German operators have offered more and more VoIP on top of DSL lines and ceased offering ISDN lines. New ISDN lines have been no longer available in Germany since 2018, existing ISDN lines were phased out from 2016 onwards and existing customers were encouraged to move to DSL-based VoIP products. Deutsche Telekom intended to phase-out by 2018 but postponed the date to 2020, other providers like Vodafone estimate to have their phase-out completed by 2022.",
"title": "International deployment"
},
{
"paragraph_id": 51,
"text": "OTE, the incumbent telecommunications operator, offers ISDN BRI (BRA) services in Greece. Following the launch of ADSL in 2003, the importance of ISDN for data transfer began to decrease and is today limited to niche business applications with point-to-point requirements.",
"title": "International deployment"
},
{
"paragraph_id": 52,
"text": "Bharat Sanchar Nigam Limited, Reliance Communications and Bharti Airtel are the largest communication service providers, and offer both ISDN BRI and PRI services across the country. Reliance Communications and Bharti Airtel uses the DLC technology for providing these services. With the introduction of broadband technology, the load on bandwidth is being absorbed by ADSL. ISDN continues to be an important backup network for point-to-point leased line customers such as banks, Eseva Centers, Life Insurance Corporation of India, and SBI ATMs.",
"title": "International deployment"
},
{
"paragraph_id": 53,
"text": "On April 19, 1988, Japanese telecommunications company NTT began offering nationwide ISDN services trademarked INS Net 64, and INS Net 1500, a fruition of NTT's independent research and trial from the 1970s of what it referred to the INS (Information Network System).",
"title": "International deployment"
},
{
"paragraph_id": 54,
"text": "Previously, in April 1985, Japanese digital telephone exchange hardware made by Fujitsu was used to experimentally deploy the world's first I interface ISDN. The I interface, unlike the older and incompatible Y interface, is what modern ISDN services use today.",
"title": "International deployment"
},
{
"paragraph_id": 55,
"text": "Since 2000, NTT's ISDN offering have been known as FLET's ISDN, incorporating the \"FLET's\" brand that NTT uses for all of its ISP offerings.",
"title": "International deployment"
},
{
"paragraph_id": 56,
"text": "In Japan, the number of ISDN subscribers dwindled as alternative technologies such as ADSL, cable Internet access, and fiber to the home gained greater popularity. On November 2, 2010, NTT announced plans to migrate their backend from PSTN to the IP network from around 2020 to around 2025. For this migration, ISDN services will be retired, and fiber optic services are recommended as an alternative.",
"title": "International deployment"
},
{
"paragraph_id": 57,
"text": "On April 19, 1988, Norwegian telecommunications company Telenor began offering nationwide ISDN services trademarked INS Net 64, and INS Net 1500, a fruition of NTT's independent research and trial from the 1970s of what it referred to the INS (Information Network System).",
"title": "International deployment"
},
{
"paragraph_id": 58,
"text": "In the United Kingdom, British Telecom (BT) provides ISDN2e (BRI) as well as ISDN30 (PRI). Until April 2006, they also offered services named Home Highway and Business Highway, which were BRI ISDN-based services that offered integrated analogue connectivity as well as ISDN. Later versions of the Highway products also included built-in USB sockets for direct computer access. Home Highway was bought by many home users, usually for Internet connection, although not as fast as ADSL, because it was available before ADSL and in places where ADSL does not reach.",
"title": "International deployment"
},
{
"paragraph_id": 59,
"text": "In early 2015, BT announced their intention to retire the UK's ISDN infrastructure by 2025.",
"title": "International deployment"
},
{
"paragraph_id": 60,
"text": "ISDN-BRI never gained popularity as a general use telephone access technology in Canada and the US, and remains a niche product. The service was seen as \"a solution in search of a problem\", and the extensive array of options and features were difficult for customers to understand and use. ISDN has long been known by derogatory backronyms highlighting these issues, such as It Still Does Nothing, Innovations Subscribers Don't Need, and I Still Don't kNow, or, from the supposed standpoint of telephone companies, I Smell Dollars Now.",
"title": "International deployment"
},
{
"paragraph_id": 61,
"text": "Although various minimum bandwidths have been used in definitions of Broadband Internet access, ranging up from 64 kbit/s up to 1.0 Mbit/s, the 2006 OECD report is typical by defining broadband as having download data transfer rates equal to or faster than 256 kbit/s, while the United States FCC, as of 2008, defines broadband as anything above 768 kbit/s. Once the term \"broadband\" came to be associated with data rates incoming to the customer at 256 kbit/s or more, and alternatives like ADSL grew in popularity, the consumer market for BRI did not develop. Its only remaining advantage is that, while ADSL has a functional distance limitation and can use ADSL loop extenders, BRI has a greater limit and can use repeaters. As such, BRI may be acceptable for customers who are too remote for ADSL. Widespread use of BRI is further stymied by some small North American CLECs such as CenturyTel having given up on it and not providing Internet access using it. However, AT&T in most states (especially the former SBC/SWB territory) will still install an ISDN BRI line anywhere a normal analog line can be placed and the monthly charge is roughly $55.",
"title": "International deployment"
},
{
"paragraph_id": 62,
"text": "ISDN-BRI is currently primarily used in industries with specialized and very specific needs. High-end videoconferencing hardware can bond up to 8 B-channels together (using a BRI circuit for every 2 channels) to provide digital, circuit-switched video connections to almost anywhere in the world. This is very expensive, and is being replaced by IP-based conferencing, but where cost concern is less of an issue than predictable quality and where a QoS-enabled IP does not exist, BRI is the preferred choice.",
"title": "International deployment"
},
{
"paragraph_id": 63,
"text": "Most modern non-VoIP PBXs use ISDN-PRI circuits. These are connected via T1 lines with the central office switch, replacing older analog two-way and direct inward dialing (DID) trunks. PRI is capable of delivering Calling Line Identification (CLID) in both directions so that the telephone number of an extension, rather than a company's main number, can be sent. It is still commonly used in recording studios and some radio programs, when a voice-over actor or host is in one studio conducting remote work, but the director and producer are in a studio at another location. The ISDN protocol delivers channelized, not-over-the-Internet service, powerful call setup and routing features, faster setup and tear down, superior audio fidelity as compared to plain old telephone service (POTS), lower delay and, at higher densities, lower cost.",
"title": "International deployment"
},
{
"paragraph_id": 64,
"text": "In 2013, Verizon announced it would no longer take orders for ISDN service in the Northeastern United States.",
"title": "International deployment"
}
]
| Integrated Services Digital Network (ISDN) is a set of communication standards for simultaneous digital transmission of voice, video, data, and other network services over the digitalised circuits of the public switched telephone network. Work on the standard began in 1980 at Bell Labs and was formally standardized in 1988 in the CCITT "Red Book". By the time the standard was released, newer networking systems with much greater speeds were available, and ISDN saw relatively little uptake in the wider market. One estimate suggests ISDN use peaked at a worldwide total of 25 million subscribers at a time when 1.3 billion analog lines were in use. ISDN has largely been replaced with digital subscriber line (DSL) systems of much higher performance. Prior to ISDN, the telephone system consisted of digital links like T1/E1 on the long-distance lines between telephone company offices and analog signals on copper telephone wires to the customers, the "last mile". At the time, the network was viewed as a way to transport voice, with some special services available for data using additional equipment like modems or by providing a T1 on the customer's location. What became ISDN started as an effort to digitize the last mile, originally under the name "Public Switched Digital Capacity" (PSDC). This would allow call routing to be completed in an all-digital system, while also offering a separate data line. The Basic Rate Interface, or BRI, is the standard last-mile connection in the ISDN system, offering two 64 kbit/s "bearer" lines and a single 16 kbit/s "delta" channel for commands and data. Although ISDN found a number of niche roles and some wider uptake in specific locales, the system was largely ignored and garnered the industry nickname "innovation subscribers didn't need." It found a use for a time for small-office digital connection, using the voice lines for data at 64 kbit/s, sometimes "bonded" to 128 kbit/s, but the introduction of 56 kbit/s modems undercut its value in many roles. It also found use in videoconference systems, where the direct end-to-end connection was desirable. The H.320 standard was designed around its 64 kbit/s data rate. The underlying ISDN concepts found wider use as a replacement for the T1/E1 lines it was originally intended to extend, roughly doubling the performance of those lines. | 2001-11-09T22:31:47Z | 2023-12-10T03:51:54Z | [
"Template:Cite book",
"Template:Cite journal",
"Template:Dead link",
"Template:Sfn",
"Template:Convert",
"Template:Citation",
"Template:Cbignore",
"Template:Wikibooks",
"Template:Short description",
"Template:Distinguish",
"Template:IPstack",
"Template:Citation needed",
"Template:Telecommunications",
"Template:Authority control",
"Template:Redirect",
"Template:Nbsp",
"Template:When",
"Template:Reflist",
"Template:Cite web",
"Template:Cite news",
"Template:Internet Access"
]
| https://en.wikipedia.org/wiki/Integrated_Services_Digital_Network |
15,235 | Genomic imprinting | Genomic imprinting is an epigenetic phenomenon that causes genes to be expressed or not, depending on whether they are inherited from the mother or the father. Genes can also be partially imprinted. Partial imprinting occurs when the alleles inherited from the two parents are expressed to different degrees, rather than one parent's allele being completely expressed and the other completely suppressed. Forms of genomic imprinting have been demonstrated in fungi, plants and animals. In 2014, there were about 150 imprinted genes known in mice and about half that in humans. As of 2019, 260 imprinted genes have been reported in mice and 228 in humans.
Genomic imprinting is an inheritance process independent of the classical Mendelian inheritance. It is an epigenetic process that involves DNA methylation and histone methylation without altering the genetic sequence. These epigenetic marks are established ("imprinted") in the germline (sperm or egg cells) of the parents and are maintained through mitotic cell divisions in the somatic cells of an organism.
Appropriate imprinting of certain genes is important for normal development. Human diseases involving genomic imprinting include Angelman, Prader–Willi, and Beckwith–Wiedemann syndromes. Methylation defects have also been associated with male infertility.
In diploid organisms (like humans), the somatic cells possess two copies of the genome, one inherited from the father and one from the mother. Each autosomal gene is therefore represented by two copies, or alleles, with one copy inherited from each parent at fertilization. The expressed allele is dependent upon its parental origin. For example, the gene encoding insulin-like growth factor 2 (IGF2/Igf2) is only expressed from the allele inherited from the father. Although imprinted genes account for only a small proportion of mammalian genes, they play an important role in embryogenesis, particularly in the formation of visceral structures and the nervous system.
The term "imprinting" was first used to describe events in the insect Pseudococcus nipae. In Pseudococcids (mealybugs) (Hemiptera, Coccoidea) both the male and female develop from a fertilised egg. In females, all chromosomes remain euchromatic and functional. In embryos destined to become males, one haploid set of chromosomes becomes heterochromatinised after the sixth cleavage division and remains so in most tissues; males are thus functionally haploid.
That imprinting might be a feature of mammalian development was suggested in breeding experiments in mice carrying reciprocal chromosomal translocations. Nucleus transplantation experiments in mouse zygotes in the early 1980s confirmed that normal development requires the contribution of both the maternal and paternal genomes. The vast majority of mouse embryos derived from parthenogenesis (called parthenogenones, with two maternal or egg genomes) and androgenesis (called androgenones, with two paternal or sperm genomes) die at or before the blastocyst/implantation stage. In the rare instances that they develop to postimplantation stages, gynogenetic embryos show better embryonic development relative to placental development, while for androgenones, the reverse is true. For the latter, only a few have been described (in a 1984 paper). Nevertheless, in 2018 genome editing allowed for bipaternal mice and viable bimaternal mice, and even (in 2022) parthenogenesis, although this is still far from full reimprinting. Finally, in March 2023, viable bipaternal embryos were created.
No naturally occurring cases of parthenogenesis exist in mammals because of imprinted genes. However, in 2004, experimental manipulation by Japanese researchers of a paternal methylation imprint controlling the Igf2 gene led to the birth of a mouse (named Kaguya) with two maternal sets of chromosomes, though it is not a true parthenogenone since cells from two different female mice were used. The researchers were able to succeed by using one egg from an immature parent, thus reducing maternal imprinting, and modifying it to express the gene Igf2, which is normally only expressed by the paternal copy of the gene.
Parthenogenetic/gynogenetic embryos have twice the normal expression level of maternally derived genes, and lack expression of paternally expressed genes, while the reverse is true for androgenetic embryos. It is now known that there are at least 80 imprinted genes in humans and mice, many of which are involved in embryonic and placental growth and development. Hybrid offspring of two species may exhibit unusual growth due to the novel combination of imprinted genes.
Various methods have been used to identify imprinted genes. In swine, Bischoff et al. compared transcriptional profiles using DNA microarrays to survey differentially expressed genes between parthenotes (2 maternal genomes) and control fetuses (1 maternal, 1 paternal genome). An intriguing study surveying the transcriptome of murine brain tissues revealed over 1300 imprinted gene loci (approximately 10-fold more than previously reported) by RNA-sequencing from F1 hybrids resulting from reciprocal crosses. The result, however, has been challenged by others, who claimed that this is an overestimation by an order of magnitude due to flawed statistical analysis.
In domesticated livestock, single-nucleotide polymorphisms in imprinted genes influencing foetal growth and development have been shown to be associated with economically important production traits in cattle, sheep and pigs.
At the same time as the generation of the gynogenetic and androgenetic embryos discussed above, mouse embryos were also being generated that contained only small regions that were derived from either a paternal or maternal source. The generation of a series of such uniparental disomies, which together span the entire genome, allowed the creation of an imprinting map. Those regions which when inherited from a single parent result in a discernible phenotype contain imprinted gene(s). Further research showed that within these regions there were often numerous imprinted genes. Around 80% of imprinted genes are found in clusters such as these, called imprinted domains, suggesting a level of co-ordinated control. More recently, genome-wide screens to identify imprinted genes have used differential expression of mRNAs from control fetuses and parthenogenetic or androgenetic fetuses hybridized to gene expression profiling microarrays, allele-specific gene expression using SNP genotyping microarrays, transcriptome sequencing, and in silico prediction pipelines.
Imprinting is a dynamic process. It must be possible to erase and re-establish imprints through each generation so that genes that are imprinted in an adult may still be expressed in that adult's offspring. (For example, the maternal genes that control insulin production will be imprinted in a male but will be expressed in any of the male's offspring that inherit these genes.) The nature of imprinting must therefore be epigenetic rather than DNA sequence dependent. In germline cells the imprint is erased and then re-established according to the sex of the individual, i.e. in the developing sperm (during spermatogenesis), a paternal imprint is established, whereas in developing oocytes (oogenesis), a maternal imprint is established. This process of erasure and reprogramming is necessary such that the germ cell imprinting status is relevant to the sex of the individual. In both plants and mammals there are two major mechanisms that are involved in establishing the imprint; these are DNA methylation and histone modifications.
A recent study has suggested a novel heritable imprinting mechanism in humans that would be specific to placental tissue and independent of DNA methylation (the main and classical mechanism for genomic imprinting). This was observed in humans, but not in mice, suggesting that it developed after the evolutionary divergence of humans and mice, ~80 Mya. Among the hypothetical explanations for this novel phenomenon, two possible mechanisms have been proposed: either a histone modification that confers imprinting at novel placental-specific imprinted loci or, alternatively, a recruitment of DNMTs to these loci by a specific and unknown transcription factor that would be expressed during early trophoblast differentiation.
The grouping of imprinted genes within clusters allows them to share common regulatory elements, such as non-coding RNAs and differentially methylated regions (DMRs). When these regulatory elements control the imprinting of one or more genes, they are known as imprinting control regions (ICR). The expression of non-coding RNAs, such as antisense Igf2r RNA (Air) on mouse chromosome 17 and KCNQ1OT1 on human chromosome 11p15.5, has been shown to be essential for the imprinting of genes in their corresponding regions.
Differentially methylated regions are generally segments of DNA rich in cytosine and guanine nucleotides, with the cytosine nucleotides methylated on one copy but not on the other. Contrary to expectation, methylation does not necessarily mean silencing; instead, the effect of methylation depends upon the default state of the region.
The control of expression of specific genes by genomic imprinting is unique to therian mammals (placental mammals and marsupials) and flowering plants. Imprinting of whole chromosomes has been reported in mealybugs (Genus: Pseudococcus) and a fungus gnat (Sciara). It has also been established that X-chromosome inactivation occurs in an imprinted manner in the extra-embryonic tissues of mice and all tissues in marsupials, where it is always the paternal X-chromosome which is silenced.
The majority of imprinted genes in mammals have been found to have roles in the control of embryonic growth and development, including development of the placenta. Other imprinted genes are involved in post-natal development, with roles affecting suckling and metabolism.
A widely accepted hypothesis for the evolution of genomic imprinting is the "parental conflict hypothesis". Also known as the kinship theory of genomic imprinting, this hypothesis states that the inequality between parental genomes due to imprinting is a result of the differing interests of each parent in terms of the evolutionary fitness of their genes. The father's genes that encode for imprinting gain greater fitness through the success of the offspring, at the expense of the mother. The mother's evolutionary imperative is often to conserve resources for her own survival while providing sufficient nourishment to current and subsequent litters. Accordingly, paternally expressed genes tend to be growth-promoting whereas maternally expressed genes tend to be growth-limiting. In support of this hypothesis, genomic imprinting has been found in all placental mammals, where post-fertilisation offspring resource consumption at the expense of the mother is high; although it has also been found in oviparous birds where there is relatively little post-fertilisation resource transfer and therefore less parental conflict. A small number of imprinted genes are fast evolving under positive Darwinian selection possibly due to antagonistic co-evolution. The majority of imprinted genes display high levels of micro-synteny conservation and have undergone very few duplications in placental mammalian lineages.
However, our understanding of the molecular mechanisms behind genomic imprinting shows that it is the maternal genome that controls much of the imprinting of both its own and the paternally-derived genes in the zygote, making it difficult to explain why the maternal genes would willingly relinquish their dominance to that of the paternally-derived genes in light of the conflict hypothesis.
Another hypothesis proposed is that some imprinted genes act coadaptively to improve both fetal development and maternal provisioning for nutrition and care. In it, a subset of paternally expressed genes are co-expressed in both the placenta and the mother's hypothalamus. This would come about through selective pressure from parent-infant coadaptation to improve infant survival. Paternally expressed 3 (PEG3) is a gene for which this hypothesis may apply.
Others have approached their study of the origins of genomic imprinting from a different side, arguing that natural selection is operating on the role of epigenetic marks as machinery for homologous chromosome recognition during meiosis, rather than on their role in differential expression. This argument centers on the existence of epigenetic effects on chromosomes that do not directly affect gene expression, but do depend on which parent the chromosome originated from. This group of epigenetic changes that depend on the chromosome's parent of origin (including both those that affect gene expression and those that do not) are called parental origin effects, and include phenomena such as paternal X inactivation in the marsupials, nonrandom parental chromatid distribution in the ferns, and even mating type switching in yeast. This diversity in organisms that show parental origin effects has prompted theorists to place the evolutionary origin of genomic imprinting before the last common ancestor of plants and animals, over a billion years ago.
Natural selection for genomic imprinting requires genetic variation in a population. A hypothesis for the origin of this genetic variation states that the host-defense system responsible for silencing foreign DNA elements, such as genes of viral origin, mistakenly silenced genes whose silencing turned out to be beneficial for the organism. There appears to be an over-representation of retrotransposed genes, that is to say genes that are inserted into the genome by viruses, among imprinted genes. It has also been postulated that if the retrotransposed gene is inserted close to another imprinted gene, it may just acquire this imprint.
Unfortunately, the relationship between the phenotype and genotype of imprinted genes is solely conceptual. The idea is framed using two alleles at a single locus, which give three different possible genotype classes. The reciprocal heterozygote genotype class is key to understanding how imprinting affects the genotype-to-phenotype relationship. Reciprocal heterozygotes are genetically equivalent, but they are phenotypically nonequivalent; their phenotype is therefore not determined by the equivalence of their genotypes. This can ultimately increase the diversity of genetic classes, expanding the flexibility of imprinted genes. This increase also demands greater testing capability and a wider assortment of tests to determine the presence of imprinting.
When a locus is identified as imprinted, the two reciprocal heterozygote classes express different alleles. Imprinted genes inherited by offspring are believed to show monoallelic expression: a single allele at the locus produces the phenotype even though two alleles are inherited. This genotype class is called parental imprinting, also known as dominant imprinting. Phenotypic patterns vary according to the possible expressions of the paternal and maternal genotypes: alleles inherited from different parents confer different phenotypic qualities, with one allele having the larger phenotypic value and the other allele being silenced. Underdominance at the locus is another possible pattern of phenotypic expression, in which both the maternal and paternal phenotypes have a small value rather than one having a large value and silencing the other.
Statistical frameworks and mapping models are used to identify imprinting effects on genes and complex traits. Allelic parent-of-origin effects influence the variation in phenotype that derives from the imprinted genotype classes. These models for mapping and identifying imprinting effects are built from unordered genotypes, and they capture classic quantitative-genetic effects as well as the dominance effects of the imprinted genes.
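As a concrete illustration of this kind of analysis, the short sketch below applies one of the simplest possible tests for a parent-of-origin effect: given allele-specific RNA-seq read counts from an F1 individual of a reciprocal cross, a binomial test asks whether the maternal allele accounts for significantly more (or less) than the 50% of reads expected in the absence of imprinting. This is a minimal, hedged example rather than the method of any study cited above; the read counts, function name and significance threshold are illustrative assumptions.

# Minimal sketch: binomial test for parent-of-origin allelic bias at one locus.
# Assumes SciPy >= 1.7 is installed; the counts below are invented for illustration.
from scipy.stats import binomtest

def parent_of_origin_bias(maternal_reads: int, paternal_reads: int, alpha: float = 0.05):
    total = maternal_reads + paternal_reads
    result = binomtest(maternal_reads, total, p=0.5, alternative="two-sided")
    return result.pvalue, result.pvalue < alpha

if __name__ == "__main__":
    # Hypothetical locus where 180 of 200 informative reads carry the maternal allele.
    pvalue, biased = parent_of_origin_bias(180, 20)
    print(f"p-value = {pvalue:.3g}; parent-of-origin bias detected: {biased}")

In practice, a genome-wide screen would apply such a test at many loci in both cross directions and correct for multiple testing, which is precisely where the statistical disagreements mentioned earlier arise.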
Imprinting may cause problems in cloning, with clones having DNA that is not methylated in the correct positions. It is possible that this is due to a lack of time for reprogramming to be completely achieved. When a nucleus is added to an egg during somatic cell nuclear transfer, the egg starts dividing in minutes, as compared to the days or months it takes for reprogramming during embryonic development. If time is the responsible factor, it may be possible to delay cell division in clones, giving time for proper reprogramming to occur.
An allele of the "callipyge" (from the Greek for "beautiful buttocks"), or CLPG, gene in sheep produces large buttocks consisting of muscle with very little fat. The large-buttocked phenotype only occurs when the allele is present on the copy of chromosome 18 inherited from a sheep's father and is not on the copy of chromosome 18 inherited from that sheep's mother.
In vitro fertilisation, including ICSI, is associated with an increased risk of imprinting disorders, with an odds ratio of 3.7 (95% confidence interval 1.4 to 9.7).
Epigenetic deregulation at the imprinted H19 gene in sperm has been observed in association with male infertility. Indeed, loss of methylation at the imprinted H19 gene has been observed in association with MTHFR gene promoter hypermethylation in semen samples from infertile males.
The first imprinted genetic disorders to be described in humans were the reciprocally inherited Prader-Willi syndrome and Angelman syndrome. Both syndromes are associated with loss of the chromosomal region 15q11-13 (band 11 of the long arm of chromosome 15). This region contains the paternally expressed genes SNRPN and NDN and the maternally expressed gene UBE3A.
DIRAS3 is a paternally expressed and maternally imprinted gene located on chromosome 1 in humans. Reduced DIRAS3 expression is linked to an increased risk of ovarian and breast cancers; in 41% of breast and ovarian cancers the protein encoded by DIRAS3 is not expressed, suggesting that it functions as a tumor suppressor gene. Therefore, if uniparental disomy occurs and a person inherits both chromosomes from the mother, the gene will not be expressed and the individual is put at a greater risk for breast and ovarian cancer.
Other conditions involving imprinting include Beckwith-Wiedemann syndrome, Silver-Russell syndrome, and pseudohypoparathyroidism.
Transient neonatal diabetes mellitus can also involve imprinting.
The "imprinted brain hypothesis" argues that unbalanced imprinting may be a cause of autism and psychosis.
In insects, imprinting affects entire chromosomes. In some insects the entire paternal genome is silenced in male offspring, and thus is involved in sex determination. The imprinting produces effects similar to the mechanisms in other insects that eliminate paternally inherited chromosomes in male offspring, including arrhenotoky.
In social honey bees, parent-of-origin and allele-specific gene expression has been studied in reciprocal crosses to explore the epigenetic mechanisms underlying aggressive behavior.
In placental species, parent-offspring conflict can result in the evolution of strategies, such as genomic imprinting, for embryos to subvert maternal nutrient provisioning. Despite several attempts to find it, genomic imprinting has not been found in the platypus, reptiles, birds, or fish. The absence of genomic imprinting in a placental reptile, the Pseudemoia entrecasteauxii, is interesting as genomic imprinting was thought to be associated with the evolution of viviparity and placental nutrient transport.
Studies in domestic livestock, such as dairy and beef cattle, have implicated imprinted genes (e.g. IGF2) in a range of economic traits, including dairy performance in Holstein-Friesian cattle.
Foraging behavior in mice has been found to be influenced by sexually dimorphic allele expression, implicating a cross-gender imprinting influence that varies throughout the body and may dominate expression and shape behavior.
A similar imprinting phenomenon has also been described in flowering plants (angiosperms). During fertilization of the egg cell, a second, separate fertilization event gives rise to the endosperm, an extraembryonic structure that nourishes the embryo in a manner analogous to the mammalian placenta. Unlike the embryo, the endosperm is often formed from the fusion of two maternal cells with a male gamete. This results in a triploid genome. The 2:1 ratio of maternal to paternal genomes appears to be critical for seed development. Some genes are found to be expressed from both maternal genomes while others are expressed exclusively from the lone paternal copy. It has been suggested that these imprinted genes are responsible for the triploid block effect in flowering plants that prevents hybridization between diploids and autotetraploids. Several computational methods to detect imprinting genes in plants from reciprocal crosses have been proposed. | [
{
"paragraph_id": 0,
"text": "Genomic imprinting is an epigenetic phenomenon that causes genes to be expressed or not, depending on whether they are inherited from the mother or the father. Genes can also be partially imprinted. Partial imprinting occurs when alleles from both parents are differently expressed rather than complete expression and complete suppression of one parent's allele. Forms of genomic imprinting have been demonstrated in fungi, plants and animals. In 2014, there were about 150 imprinted genes known in mice and about half that in humans. As of 2019, 260 imprinted genes have been reported in mice and 228 in humans.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Genomic imprinting is an inheritance process independent of the classical Mendelian inheritance. It is an epigenetic process that involves DNA methylation and histone methylation without altering the genetic sequence. These epigenetic marks are established (\"imprinted\") in the germline (sperm or egg cells) of the parents and are maintained through mitotic cell divisions in the somatic cells of an organism.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Appropriate imprinting of certain genes is important for normal development. Human diseases involving genomic imprinting include Angelman, Prader–Willi, and Beckwith–Wiedemann syndromes. Methylation defects have also been associated with male infertility.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In diploid organisms (like humans), the somatic cells possess two copies of the genome, one inherited from the father and one from the mother. Each autosomal gene is therefore represented by two copies, or alleles, with one copy inherited from each parent at fertilization. The expressed allele is dependent upon its parental origin. For example, the gene encoding insulin-like growth factor 2 (IGF2/Igf2) is only expressed from the allele inherited from the father. Although imprinting accounts for a small proportion of mammalian genes, they play an important role in embryogenesis particularly in the formation of visceral structures and the nervous system.",
"title": "Overview"
},
{
"paragraph_id": 4,
"text": "The term \"imprinting\" was first used to describe events in the insect Pseudococcus nipae. In Pseudococcids (mealybugs) (Hemiptera, Coccoidea) both the male and female develop from a fertilised egg. In females, all chromosomes remain euchromatic and functional. In embryos destined to become males, one haploid set of chromosomes becomes heterochromatinised after the sixth cleavage division and remains so in most tissues; males are thus functionally haploid.",
"title": "Overview"
},
{
"paragraph_id": 5,
"text": "That imprinting might be a feature of mammalian development was suggested in breeding experiments in mice carrying reciprocal chromosomal translocations. Nucleus transplantation experiments in mouse zygotes in the early 1980s confirmed that normal development requires the contribution of both the maternal and paternal genomes. The vast majority of mouse embryos derived from parthenogenesis (called parthenogenones, with two maternal or egg genomes) and androgenesis (called androgenones, with two paternal or sperm genomes) die at or before the blastocyst/implantation stage. In the rare instances that they develop to postimplantation stages, gynogenetic embryos show better embryonic development relative to placental development, while for androgenones, the reverse is true. Nevertheless, for the latter, only a few have been described (in a 1984 paper). Nevertheless, in 2018 genome editing allowed for bipaternal and viable bimaternal mouse and even (in 2022) parthenogenesis, still this is far from full reimprinting. Finally in March 2023 viable bipaternal ebryos were created.",
"title": "Imprinted genes in mammals"
},
{
"paragraph_id": 6,
"text": "No naturally occurring cases of parthenogenesis exist in mammals because of imprinted genes. However, in 2004, experimental manipulation by Japanese researchers of a paternal methylation imprint controlling the Igf2 gene led to the birth of a mouse (named Kaguya) with two maternal sets of chromosomes, though it is not a true parthenogenone since cells from two different female mice were used. The researchers were able to succeed by using one egg from an immature parent, thus reducing maternal imprinting, and modifying it to express the gene Igf2, which is normally only expressed by the paternal copy of the gene.",
"title": "Imprinted genes in mammals"
},
{
"paragraph_id": 7,
"text": "Parthenogenetic/gynogenetic embryos have twice the normal expression level of maternally derived genes, and lack expression of paternally expressed genes, while the reverse is true for androgenetic embryos. It is now known that there are at least 80 imprinted genes in humans and mice, many of which are involved in embryonic and placental growth and development. Hybrid offspring of two species may exhibit unusual growth due to the novel combination of imprinted genes.",
"title": "Imprinted genes in mammals"
},
{
"paragraph_id": 8,
"text": "Various methods have been used to identify imprinted genes. In swine, Bischoff et al. compared transcriptional profiles using DNA microarrays to survey differentially expressed genes between parthenotes (2 maternal genomes) and control fetuses (1 maternal, 1 paternal genome). An intriguing study surveying the transcriptome of murine brain tissues revealed over 1300 imprinted gene loci (approximately 10-fold more than previously reported) by RNA-sequencing from F1 hybrids resulting from reciprocal crosses. The result however has been challenged by others who claimed that this is an overestimation by an order of magnitude due to flawed statistical analysis.",
"title": "Imprinted genes in mammals"
},
{
"paragraph_id": 9,
"text": "In domesticated livestock, single-nucleotide polymorphisms in imprinted genes influencing foetal growth and development have been shown to be associated with economically important production traits in cattle, sheep and pigs.",
"title": "Imprinted genes in mammals"
},
{
"paragraph_id": 10,
"text": "At the same time as the generation of the gynogenetic and androgenetic embryos discussed above, mouse embryos were also being generated that contained only small regions that were derived from either a paternal or maternal source. The generation of a series of such uniparental disomies, which together span the entire genome, allowed the creation of an imprinting map. Those regions which when inherited from a single parent result in a discernible phenotype contain imprinted gene(s). Further research showed that within these regions there were often numerous imprinted genes. Around 80% of imprinted genes are found in clusters such as these, called imprinted domains, suggesting a level of co-ordinated control. More recently, genome-wide screens to identify imprinted genes have used differential expression of mRNAs from control fetuses and parthenogenetic or androgenetic fetuses hybridized to gene expression profiling microarrays, allele-specific gene expression using SNP genotyping microarrays, transcriptome sequencing, and in silico prediction pipelines.",
"title": "Imprinted genes in mammals"
},
{
"paragraph_id": 11,
"text": "Imprinting is a dynamic process. It must be possible to erase and re-establish imprints through each generation so that genes that are imprinted in an adult may still be expressed in that adult's offspring. (For example, the maternal genes that control insulin production will be imprinted in a male but will be expressed in any of the male's offspring that inherit these genes.) The nature of imprinting must therefore be epigenetic rather than DNA sequence dependent. In germline cells the imprint is erased and then re-established according to the sex of the individual, i.e. in the developing sperm (during spermatogenesis), a paternal imprint is established, whereas in developing oocytes (oogenesis), a maternal imprint is established. This process of erasure and reprogramming is necessary such that the germ cell imprinting status is relevant to the sex of the individual. In both plants and mammals there are two major mechanisms that are involved in establishing the imprint; these are DNA methylation and histone modifications.",
"title": "Imprinted genes in mammals"
},
{
"paragraph_id": 12,
"text": "Recently, a new study has suggested a novel inheritable imprinting mechanism in humans that would be specific of placental tissue and that is independent of DNA methylation (the main and classical mechanism for genomic imprinting). This was observed in humans, but not in mice, suggesting development after the evolutionary divergence of humans and mice, ~80 Mya. Among the hypothetical explanations for this novel phenomenon, two possible mechanisms have been proposed: either a histone modification that confers imprinting at novel placental-specific imprinted loci or, alternatively, a recruitment of DNMTs to these loci by a specific and unknown transcription factor that would be expressed during early trophoblast differentiation.",
"title": "Imprinted genes in mammals"
},
{
"paragraph_id": 13,
"text": "The grouping of imprinted genes within clusters allows them to share common regulatory elements, such as non-coding RNAs and differentially methylated regions (DMRs). When these regulatory elements control the imprinting of one or more genes, they are known as imprinting control regions (ICR). The expression of non-coding RNAs, such as antisense Igf2r RNA (Air) on mouse chromosome 17 and KCNQ1OT1 on human chromosome 11p15.5, have been shown to be essential for the imprinting of genes in their corresponding regions.",
"title": "Imprinted genes in mammals"
},
{
"paragraph_id": 14,
"text": "Differentially methylated regions are generally segments of DNA rich in cytosine and guanine nucleotides, with the cytosine nucleotides methylated on one copy but not on the other. Contrary to expectation, methylation does not necessarily mean silencing; instead, the effect of methylation depends upon the default state of the region.",
"title": "Imprinted genes in mammals"
},
{
"paragraph_id": 15,
"text": "The control of expression of specific genes by genomic imprinting is unique to therian mammals (placental mammals and marsupials) and flowering plants. Imprinting of whole chromosomes has been reported in mealybugs (Genus: Pseudococcus) and a fungus gnat (Sciara). It has also been established that X-chromosome inactivation occurs in an imprinted manner in the extra-embryonic tissues of mice and all tissues in marsupials, where it is always the paternal X-chromosome which is silenced.",
"title": "Imprinted genes in mammals"
},
{
"paragraph_id": 16,
"text": "The majority of imprinted genes in mammals have been found to have roles in the control of embryonic growth and development, including development of the placenta. Other imprinted genes are involved in post-natal development, with roles affecting suckling and metabolism.",
"title": "Imprinted genes in mammals"
},
{
"paragraph_id": 17,
"text": "A widely accepted hypothesis for the evolution of genomic imprinting is the \"parental conflict hypothesis\". Also known as the kinship theory of genomic imprinting, this hypothesis states that the inequality between parental genomes due to imprinting is a result of the differing interests of each parent in terms of the evolutionary fitness of their genes. The father's genes that encode for imprinting gain greater fitness through the success of the offspring, at the expense of the mother. The mother's evolutionary imperative is often to conserve resources for her own survival while providing sufficient nourishment to current and subsequent litters. Accordingly, paternally expressed genes tend to be growth-promoting whereas maternally expressed genes tend to be growth-limiting. In support of this hypothesis, genomic imprinting has been found in all placental mammals, where post-fertilisation offspring resource consumption at the expense of the mother is high; although it has also been found in oviparous birds where there is relatively little post-fertilisation resource transfer and therefore less parental conflict. A small number of imprinted genes are fast evolving under positive Darwinian selection possibly due to antagonistic co-evolution. The majority of imprinted genes display high levels of micro-synteny conservation and have undergone very few duplications in placental mammalian lineages.",
"title": "Imprinted genes in mammals"
},
{
"paragraph_id": 18,
"text": "However, our understanding of the molecular mechanisms behind genomic imprinting show that it is the maternal genome that controls much of the imprinting of both its own and the paternally-derived genes in the zygote, making it difficult to explain why the maternal genes would willingly relinquish their dominance to that of the paternally-derived genes in light of the conflict hypothesis.",
"title": "Imprinted genes in mammals"
},
{
"paragraph_id": 19,
"text": "Another hypothesis proposed is that some imprinted genes act coadaptively to improve both fetal development and maternal provisioning for nutrition and care. In it, a subset of paternally expressed genes are co-expressed in both the placenta and the mother's hypothalamus. This would come about through selective pressure from parent-infant coadaptation to improve infant survival. Paternally expressed 3 (PEG3) is a gene for which this hypothesis may apply.",
"title": "Imprinted genes in mammals"
},
{
"paragraph_id": 20,
"text": "Others have approached their study of the origins of genomic imprinting from a different side, arguing that natural selection is operating on the role of epigenetic marks as machinery for homologous chromosome recognition during meiosis, rather than on their role in differential expression. This argument centers on the existence of epigenetic effects on chromosomes that do not directly affect gene expression, but do depend on which parent the chromosome originated from. This group of epigenetic changes that depend on the chromosome's parent of origin (including both those that affect gene expression and those that do not) are called parental origin effects, and include phenomena such as paternal X inactivation in the marsupials, nonrandom parental chromatid distribution in the ferns, and even mating type switching in yeast. This diversity in organisms that show parental origin effects has prompted theorists to place the evolutionary origin of genomic imprinting before the last common ancestor of plants and animals, over a billion years ago.",
"title": "Imprinted genes in mammals"
},
{
"paragraph_id": 21,
"text": "Natural selection for genomic imprinting requires genetic variation in a population. A hypothesis for the origin of this genetic variation states that the host-defense system responsible for silencing foreign DNA elements, such as genes of viral origin, mistakenly silenced genes whose silencing turned out to be beneficial for the organism. There appears to be an over-representation of retrotransposed genes, that is to say genes that are inserted into the genome by viruses, among imprinted genes. It has also been postulated that if the retrotransposed gene is inserted close to another imprinted gene, it may just acquire this imprint.",
"title": "Imprinted genes in mammals"
},
{
"paragraph_id": 22,
"text": "Unfortunately, the relationship between the phenotype and genotype of imprinted genes is solely conceptual. The idea is frameworked using two alleles on a single locus and hosts three different possible classes of genotypes. The reciprocal heterozygotes genotype class contributes to understanding how imprinting will impact genotype to phenotype relationship. Reciprocal heterozygotes have a genetically equivalent, but they are phenotypically nonequivalent. Their phenotype may not be dependent on the equivalence of the genotype. This can ultimately increase diversity in genetic classes, expanding flexibility of imprinted genes. This increase will also force a higher degree in testing capabilities and assortment of tests to determine the presences of imprinting.",
"title": "Imprinted Loci Phenotypic Signatures"
},
{
"paragraph_id": 23,
"text": "When a locus is identified as imprinted, two different classes express different alleles. Inherited imprinted genes of offspring are believed to be monoallelic expressions. A single locus will entirely produce one's phenotype although two alleles are inherited. This genotype class is called parental imprinting, as well as dominant imprinting. Phenotypic patterns are variant to possible expressions from paternal and maternal genotypes. Different alleles inherited from different parents will host different phenotypic qualities. One allele will have a larger phenotypic value and the other allele will be silenced. Underdominance of the locus is another possibility of phenotypic expression. Both maternal and paternal phenotypes will have a small value rather than one hosting a large value and silencing the other.",
"title": "Imprinted Loci Phenotypic Signatures"
},
{
"paragraph_id": 24,
"text": "Statistical frameworks and mapping models are used to identify imprinting effects on genes and complex traits. Allelic parent-of -origin influences the vary in phenotype that derive from the imprinting of genotype classes. These models of mapping and identifying imprinting effects include using unordered genotypes to build mapping models. These models will show classic quantitative genetics and the effects of dominance of the imprinted genes.",
"title": "Imprinted Loci Phenotypic Signatures"
},
{
"paragraph_id": 25,
"text": "Imprinting may cause problems in cloning, with clones having DNA that is not methylated in the correct positions. It is possible that this is due to a lack of time for reprogramming to be completely achieved. When a nucleus is added to an egg during somatic cell nuclear transfer, the egg starts dividing in minutes, as compared to the days or months it takes for reprogramming during embryonic development. If time is the responsible factor, it may be possible to delay cell division in clones, giving time for proper reprogramming to occur.",
"title": "Disorders associated with imprinting"
},
{
"paragraph_id": 26,
"text": "An allele of the \"callipyge\" (from the Greek for \"beautiful buttocks\"), or CLPG, gene in sheep produces large buttocks consisting of muscle with very little fat. The large-buttocked phenotype only occurs when the allele is present on the copy of chromosome 18 inherited from a sheep's father and is not on the copy of chromosome 18 inherited from that sheep's mother.",
"title": "Disorders associated with imprinting"
},
{
"paragraph_id": 27,
"text": "In vitro fertilisation, including ICSI, is associated with an increased risk of imprinting disorders, with an odds ratio of 3.7 (95% confidence interval 1.4 to 9.7).",
"title": "Disorders associated with imprinting"
},
{
"paragraph_id": 28,
"text": "Epigenetic deregulations at H19 imprinted gene in sperm have been observed associated with male infertility. Indeed, methylation loss at H19 imprinted gene has been observed associated with MTHFR gene promoter hypermethylation in semen samples from infertile males.",
"title": "Disorders associated with imprinting"
},
{
"paragraph_id": 29,
"text": "The first imprinted genetic disorders to be described in humans were the reciprocally inherited Prader-Willi syndrome and Angelman syndrome. Both syndromes are associated with loss of the chromosomal region 15q11-13 (band 11 of the long arm of chromosome 15). This region contains the paternally expressed genes SNRPN and NDN and the maternally expressed gene UBE3A.",
"title": "Disorders associated with imprinting"
},
{
"paragraph_id": 30,
"text": "DIRAS3 is a paternally expressed and maternally imprinted gene located on chromosome 1 in humans. Reduced DIRAS3 expression is linked to an increased risk of ovarian and breast cancers; in 41% of breast and ovarian cancers the protein encoded by DIRAS3 is not expressed, suggesting that it functions as a tumor suppressor gene. Therefore, if uniparental disomy occurs and a person inherits both chromosomes from the mother, the gene will not be expressed and the individual is put at a greater risk for breast and ovarian cancer.",
"title": "Disorders associated with imprinting"
},
{
"paragraph_id": 31,
"text": "Other conditions involving imprinting include Beckwith-Wiedemann syndrome, Silver-Russell syndrome, and pseudohypoparathyroidism.",
"title": "Disorders associated with imprinting"
},
{
"paragraph_id": 32,
"text": "Transient neonatal diabetes mellitus can also involve imprinting.",
"title": "Disorders associated with imprinting"
},
{
"paragraph_id": 33,
"text": "The \"imprinted brain hypothesis\" argues that unbalanced imprinting may be a cause of autism and psychosis.",
"title": "Disorders associated with imprinting"
},
{
"paragraph_id": 34,
"text": "In insects, imprinting affects entire chromosomes. In some insects the entire paternal genome is silenced in male offspring, and thus is involved in sex determination. The imprinting produces effects similar to the mechanisms in other insects that eliminate paternally inherited chromosomes in male offspring, including arrhenotoky.",
"title": "Imprinted genes in other animals"
},
{
"paragraph_id": 35,
"text": "In social honey bees, the parent of origin and allele-specific genes has been studied from reciprocal crosses to explore the epigenetic mechanisms underlying aggressive behavior.",
"title": "Imprinted genes in other animals"
},
{
"paragraph_id": 36,
"text": "In placental species, parent-offspring conflict can result in the evolution of strategies, such as genomic imprinting, for embryos to subvert maternal nutrient provisioning. Despite several attempts to find it, genomic imprinting has not been found in the platypus, reptiles, birds, or fish. The absence of genomic imprinting in a placental reptile, the Pseudemoia entrecasteauxii, is interesting as genomic imprinting was thought to be associated with the evolution of viviparity and placental nutrient transport.",
"title": "Imprinted genes in other animals"
},
{
"paragraph_id": 37,
"text": "Studies in domestic livestock, such as dairy and beef cattle, have implicated imprinted genes (e.g. IGF2) in a range of economic traits, including dairy performance in Holstein-Friesian cattle.",
"title": "Imprinted genes in other animals"
},
{
"paragraph_id": 38,
"text": "Foraging behavior in mice studied is influenced by a sexually dimorphic allele expression implicating a cross-gender imprinting influence that varies throughout the body and may dominate expression and shape a behavior.",
"title": "Imprinted genes in other animals"
},
{
"paragraph_id": 39,
"text": "A similar imprinting phenomenon has also been described in flowering plants (angiosperms). During fertilization of the egg cell, a second, separate fertilization event gives rise to the endosperm, an extraembryonic structure that nourishes the embryo in a manner analogous to the mammalian placenta. Unlike the embryo, the endosperm is often formed from the fusion of two maternal cells with a male gamete. This results in a triploid genome. The 2:1 ratio of maternal to paternal genomes appears to be critical for seed development. Some genes are found to be expressed from both maternal genomes while others are expressed exclusively from the lone paternal copy. It has been suggested that these imprinted genes are responsible for the triploid block effect in flowering plants that prevents hybridization between diploids and autotetraploids. Several computational methods to detect imprinting genes in plants from reciprocal crosses have been proposed.",
"title": "Imprinted genes in plants"
}
]
| Genomic imprinting is an epigenetic phenomenon that causes genes to be expressed or not, depending on whether they are inherited from the mother or the father. Genes can also be partially imprinted. Partial imprinting occurs when alleles from both parents are differently expressed rather than complete expression and complete suppression of one parent's allele. Forms of genomic imprinting have been demonstrated in fungi, plants and animals. In 2014, there were about 150 imprinted genes known in mice and about half that in humans. As of 2019, 260 imprinted genes have been reported in mice and 228 in humans. Genomic imprinting is an inheritance process independent of the classical Mendelian inheritance. It is an epigenetic process that involves DNA methylation and histone methylation without altering the genetic sequence. These epigenetic marks are established ("imprinted") in the germline of the parents and are maintained through mitotic cell divisions in the somatic cells of an organism. Appropriate imprinting of certain genes is important for normal development. Human diseases involving genomic imprinting include Angelman, Prader–Willi, and Beckwith–Wiedemann syndromes. Methylation defects have also been associated with male infertility. | 2001-11-11T17:00:30Z | 2023-12-31T11:30:53Z | [
"Template:Anchor",
"Template:Cn",
"Template:Reflist",
"Template:Cite web",
"Template:Cite news",
"Template:Genomic imprinting",
"Template:Short description",
"Template:Cite journal",
"Template:Closed access",
"Template:Cite book",
"Template:MeshName",
"Template:Gene expression"
]
| https://en.wikipedia.org/wiki/Genomic_imprinting |
15,236 | ICANN | The Internet Corporation for Assigned Names and Numbers (ICANN /ˈaɪkæn/ EYE-kan) is an American multistakeholder group and nonprofit organization responsible for coordinating the maintenance and procedures of several databases related to the namespaces and numerical spaces of the Internet, ensuring the network's stable and secure operation. ICANN performs the actual technical maintenance work of the Central Internet Address pools and DNS root zone registries pursuant to the Internet Assigned Numbers Authority (IANA) function contract. The contract regarding the IANA stewardship functions between ICANN and the National Telecommunications and Information Administration (NTIA) of the United States Department of Commerce ended on October 1, 2016, formally transitioning the functions to the global multistakeholder community.
Much of its work has concerned the Internet's global Domain Name System (DNS), including policy development for internationalization of the DNS, introduction of new generic top-level domains (TLDs), and the operation of root name servers. The numbering facilities ICANN manages include the Internet Protocol address spaces for IPv4 and IPv6, and assignment of address blocks to regional Internet registries. ICANN also maintains registries of Internet Protocol identifiers.
ICANN's primary principles of operation have been described as helping preserve the operational stability of the Internet; to promote competition; to achieve broad representation of the global Internet community; and to develop policies appropriate to its mission through bottom-up, consensus-based processes. The organization has often included a motto of "One World. One Internet." on annual reports beginning in 2010, on less formal publications, as well as their official website.
ICANN was officially incorporated in the state of California on September 30, 1998. Originally headquartered in Marina del Rey in the same building as the University of Southern California's Information Sciences Institute (ISI), its offices are now in the Playa Vista neighborhood of Los Angeles.
Before the establishment of ICANN, the IANA function of administering registries of Internet protocol identifiers (including the distribution of top-level domains and IP addresses) was performed by Jon Postel, a computer science researcher who had been involved in the creation of ARPANET, first at UCLA and then at USC-ISI. In 1997 Postel testified before Congress that this had come about as a "side task" to this research work. The Information Sciences Institute was funded by the U.S. Department of Defense, as was SRI International's Network Information Center, which also performed some assigned name functions.
As the Internet grew and expanded globally, the U.S. Department of Commerce initiated a process to establish a new organization to perform the IANA functions. On January 30, 1998, the National Telecommunications and Information Administration (NTIA), an agency of the U.S. Department of Commerce, issued for comment, "A Proposal to Improve the Technical Management of Internet Names and Addresses." The proposed rule making, or "Green Paper", was published in the Federal Register on February 20, 1998, providing opportunity for public comment. NTIA received more than 650 comments as of March 23, 1998, when the comment period closed.
The Green Paper proposed certain actions designed to privatize the management of Internet names and addresses in a manner that allows for the development of competition and facilitates global participation in Internet management. The Green Paper proposed for discussion a variety of issues relating to DNS management including private sector creation of a new not-for-profit corporation (the "new corporation") managed by a globally and functionally representative board of directors. ICANN was formed in response to this policy. ICANN managed the Internet Assigned Numbers Authority (IANA) under contract to the United States Department of Commerce (DOC) and pursuant to an agreement with the IETF.
ICANN was incorporated in California on September 30, 1998, with entrepreneur and philanthropist Esther Dyson as founding chairwoman. It is a nonprofit public benefit corporation "organized under the California Nonprofit Public Benefit Corporation Law for charitable and public purposes." ICANN was established in California due to the presence of Jon Postel, who was a founder of ICANN and was set to be its first Chief Technology Officer prior to his unexpected death. ICANN formerly operated from the same Marina del Rey building where Postel formerly worked, which is home to an office of the Information Sciences Institute at the University of Southern California. However, ICANN's headquarters is now located in the nearby Playa Vista neighborhood of Los Angeles.
Per its original by-laws, primary responsibility for policy formation in ICANN was to be delegated to three supporting organizations (Address Supporting Organization, Domain Name Supporting Organization, and Protocol Supporting Organization), each of which was to develop and recommend substantive policies and procedures for the management of the identifiers within their respective scope. They were also required to be financially independent from ICANN. As expected, the regional Internet registries and the IETF agreed to serve as the Address Supporting Organization and Protocol Supporting Organization respectively, and ICANN issued a call for interested parties to propose the structure and composition of the Domain Name Supporting Organization. In March 1999, the ICANN Board, based in part on the DNSO proposals received, decided instead on an alternate construction for the DNSO which delineated specific constituency bodies within ICANN itself, thus adding primary responsibility for DNS policy development to ICANN's existing duties of oversight and coordination.
On July 26, 2006, the United States government renewed the contract with ICANN for performance of the IANA function for an additional one to five years. The context of ICANN's relationship with the U.S. government was clarified on September 29, 2006, when ICANN signed a new memorandum of understanding with the United States Department of Commerce (DOC). This document gave the DOC oversight over some of the ICANN operations.
In July 2008, the DOC reiterated an earlier statement that it has "no plans to transition management of the authoritative root zone file to ICANN". The letter also stresses the separate roles of the IANA and VeriSign.
On September 30, 2009, ICANN signed an agreement with the DOC (known as the "Affirmation of Commitments") that confirmed ICANN's commitment to a multistakeholder governance model, but did not remove it from DOC oversight and control. The Affirmation of Commitments, which aimed to create international oversight, ran into criticism.
On March 10, 2016, ICANN and the DOC signed a historic, culminating agreement to finally remove ICANN and IANA from the control and oversight of the DOC. On October 1, 2016, ICANN was freed from U.S. government oversight.
Since its creation, ICANN has been the subject of criticism and controversy. In 2000, professor Michael Froomkin of the University of Miami School of Law argued that ICANN's relationship with the U.S. Department of Commerce is illegal, in violation of either the Constitution or federal statutes.
On March 18, 2002, publicly elected At-Large Representative for North America board member Karl Auerbach sued ICANN in Superior Court in California to gain access to ICANN's accounting records without restriction. Auerbach won.
During September and October 2003, ICANN played a crucial role in the conflict over VeriSign's "wild card" DNS service Site Finder. After an open letter from ICANN issuing an ultimatum to VeriSign, later endorsed by the Internet Architecture Board, the company voluntarily ended the service on October 4, 2003. After this action, VeriSign filed a lawsuit against ICANN on February 27, 2004, claiming that ICANN had exceeded its authority. By this lawsuit, VeriSign sought to reduce ambiguity about ICANN's authority. The antitrust component of VeriSign's claim was dismissed during August 2004. VeriSign's challenge that ICANN overstepped its contractual rights is currently outstanding. A proposed settlement already approved by ICANN's board would resolve VeriSign's challenge to ICANN in exchange for the right to increase pricing on .com domains. At the meeting of ICANN in Rome, which took place from March 2 to 6, 2004, ICANN agreed to ask approval of the U.S. Department of Commerce for the Waiting List Service of VeriSign.
On May 17, 2004, ICANN published a proposed budget for the year 2004–05. It included proposals to increase the openness and professionalism of its operations, and greatly increased its proposed spending from US$8.27 million to $15.83 million. The increase was to be funded by the introduction of new top-level domains, charges to domain registries, and a fee for some domain name registrations, renewals and transfers (initially US$0.20 for all domains within a country-code top-level domain, and US$0.25 for all others). The Council of European National Top Level Domain Registries (CENTR), which represents the Internet registries of 39 countries, rejected the increase, accusing ICANN of a lack of financial prudence and criticizing what it describes as ICANN's "unrealistic political and operational targets". Despite the criticism, the registry agreement for the top-level domains .jobs and .travel includes a US$2 fee on every domain the licensed companies sell or renew.
After a second round of negotiations during 2004, the TLDs .eu, .asia, .travel, .jobs, .mobi, and .cat were introduced during 2005.
On February 28, 2006, ICANN's board approved a settlement with VeriSign in the lawsuit resulting from SiteFinder that involved allowing VeriSign (the registry) to raise its registration fees by up to 7% a year. This was criticised by a few members of the U.S. House of Representatives' Small Business Committee.
During February 2007, ICANN began procedures to end accreditation of one of their registrars, RegisterFly amid charges and lawsuits involving fraud, and criticism of ICANN's management of the situation. ICANN has been the subject of criticism as a result of its handling of RegisterFly, and the harm caused to thousands of clients as a result of what has been termed ICANN's "laissez faire attitude toward customer allegations of fraud".
On May 23, 2008, ICANN issued enforcement notices against ten accredited registrars and announced this through a press release entitled "'Worst Spam Offenders' Notified by ICANN, Compliance system working to correct Whois and other issues." This was largely in response to a report issued by KnujOn, called "The 10 Worst Registrars" in terms of spam advertised junk product sites and compliance failure. The mention of the word "spam" in the title of the ICANN memo is somewhat misleading since ICANN does not address issues of spam or email abuse. Website content and usage are not within ICANN's mandate. However, the KnujOn report details how various registrars have not complied with their contractual obligations under the Registrar Accreditation Agreement (RAA). The main point of the KnujOn research was to demonstrate the relationships between compliance failure, illicit product traffic, and spam. The report demonstrated that out of 900 ICANN accredited registrars, fewer than 20 held 90% of the web domains advertised in spam. These same registrars were also most frequently cited by KnujOn as failing to resolve complaints made through the Whois Data Problem Reporting System (WDPRS).
On June 26, 2008, the ICANN Board started a new process of TLD naming policy to take a "significant step forward on the introduction of new generic top-level domains." This program envisioned the availability of many new or already proposed domains, as well as a new application and implementation process.
On October 1, 2008, ICANN issued breach notices against Joker and Beijing Innovative Linkage Technology Ltd. after further researching reports and complaints issued by KnujOn. These notices gave the registrars 15 days to fix their Whois investigation efforts.
In 2010, ICANN approved a major review of its policies with respect to accountability, transparency, and public participation by the Berkman Center for Internet and Society at Harvard University. This external review supported the work of ICANN's Accountability and Transparency Review team.
On February 3, 2011, ICANN announced that it had distributed the last batch of its remaining IPv4 addresses to the world's five regional Internet registries, the organizations that manage IP addresses in different regions. These registries began assigning the final IPv4 addresses within their regions until they ran out completely.
On June 20, 2011, the ICANN board voted to end most restrictions on the names of generic top-level domains (gTLD). Companies and organizations became able to choose essentially arbitrary top-level Internet domain names. The use of non-Latin characters (such as Cyrillic, Arabic, Chinese, etc.) is also allowed in gTLDs. ICANN began accepting applications for new gTLDS on January 12, 2012. The initial price to apply for a new gTLD was set at $185,000 and the annual renewal fee is $25,000.
During December 2011, the Federal Trade Commission stated ICANN had long failed to provide safeguards that protect consumers from online swindlers.
Following the 2013 NSA spying scandal, ICANN endorsed the Montevideo Statement, although no direct connection between these could be proven.
On October 1, 2016, ICANN ended its contract with the United States Department of Commerce National Telecommunications and Information Administration (NTIA) and entered the private sector.
The European Union's General Data Protection Regulation (in force since May 25, 2018) affected ICANN's operations, which ICANN tried to address through last-minute changes.
From its founding to the present, ICANN has been formally organized as a nonprofit corporation "for charitable and public purposes" under the California Nonprofit Public Benefit Corporation Law. It is managed by a 16-member board of directors composed of eight members selected by a nominating committee on which all the constituencies of ICANN are represented; six representatives of its Supporting Organizations, sub-groups that deal with specific sections of the policies under ICANN's purview; an at-large seat filled by an at-large organization; and the president / CEO, appointed by the board.
There are currently three supporting organizations: the Generic Names Supporting Organization (GNSO) deals with policy making on generic top-level domains (gTLDs); the Country Code Names Supporting Organization (ccNSO) deals with policy making on country-code top-level domains (ccTLDs); the Address Supporting Organization (ASO) deals with policy making on IP addresses.
ICANN also relies on some advisory committees and other advisory mechanisms to receive advice on the interests and needs of stakeholders that do not directly participate in the Supporting Organizations. These include the Governmental Advisory Committee (GAC), which is composed of representatives of a large number of national governments from all over the world; the At-Large Advisory Committee (ALAC), which is composed of individual Internet users from around the world selected by each of the Regional At-Large Organizations (RALO) and Nominating Committee; the Root Server System Advisory Committee, which provides advice on the operation of the DNS root server system; the Security and Stability Advisory Committee (SSAC), which is composed of Internet experts who study security issues pertaining to ICANN's mandate; and the Technical Liaison Group (TLG), which is composed of representatives of other international technical organizations that focus, at least in part, on the Internet.
The Governmental Advisory Committee has representatives from 179 states and 38 Observer organizations, including the Holy See, Cook Islands, Niue, Taiwan, Hong Kong, Bermuda, Montserrat, the European Commission and the African Union Commission.
In addition the following organizations are GAC Observers:
As the operator of the IANA domain name functions, ICANN is responsible for the DNSSEC management of the root zone. While day-to-day operations are managed by ICANN and Verisign, the trust is rooted in a group of Trusted Community Representatives. The members of this group must not be affiliated with ICANN, but are instead members of the broader DNS community, volunteering to become a Trusted Community Representative. The role of the representatives is primarily to take part in regular key ceremonies at a physical location, organized by ICANN, and to safeguard the key materials in between.
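As an illustration of what that root-zone key material looks like from the outside, the short sketch below (an illustrative example, not an ICANN tool) queries the DNSKEY records currently published for the root zone using the third-party dnspython library; the record whose flags field equals 257 is the key-signing key whose private counterpart is handled at the key ceremonies described above. The library choice and network access are assumptions of this sketch.

# Minimal sketch: list the DNSKEY records of the DNS root zone.
# Requires the third-party dnspython package (pip install dnspython) and network access.
import dns.resolver

def list_root_dnskeys():
    answer = dns.resolver.resolve(".", "DNSKEY")
    for rdata in answer:
        # 257 marks a key-signing key (KSK); 256 marks a zone-signing key (ZSK).
        role = "KSK" if rdata.flags == 257 else "ZSK"
        print(f"flags={rdata.flags} ({role}) algorithm={rdata.algorithm}")

if __name__ == "__main__":
    list_root_dnskeys()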
In the Memorandum of Understanding that set up the relationship between ICANN and the U.S. government, ICANN was given a mandate requiring that it operate "in a bottom up, consensus driven, democratic manner." However, the attempts that ICANN has made to establish an organizational structure that would allow wide input from the global Internet community did not produce results amenable to the current Board. As a result, the At-Large constituency and direct election of board members by the global Internet community were soon abandoned.
ICANN holds periodic public meetings rotated between continents for the purpose of encouraging global participation in its processes. Resolutions of the ICANN Board, preliminary reports, and minutes of the meetings, are published on the ICANN website, sometimes in real time. However, there are criticisms from ICANN constituencies including the Noncommercial Users Constituency (NCUC) and the At-Large Advisory Committee (ALAC) that there is not enough public disclosure and that too many discussions and decisions take place out of sight of the public.
During the early 2000s, there was speculation that the United Nations might assume control of ICANN, followed by a negative reaction from the U.S. government and worries about a division of the Internet. The World Summit on the Information Society in Tunisia during November 2005 agreed not to get involved in the day-to-day and technical operations of ICANN. However, it also agreed to establish an international Internet Governance Forum, with a consultative role on the future governance of the Internet. ICANN's Governmental Advisory Committee is established to provide advice to ICANN regarding public policy issues and includes participation by many of the world's governments.
Some have attempted to argue that ICANN was never given the authority to decide policy, e.g., choose new TLDs or exclude other interested parties who refuse to pay ICANN's US$185,000 fee, but was to be a technical caretaker. Critics suggest that ICANN should not be allowed to impose business rules on market participants, and that all TLDs should be added on a first-come, first-served basis and the market should be the arbiter of who succeeds and who does not.
One task that ICANN was asked to do was to address the issue of domain name ownership resolution for generic top-level domains (gTLDs). ICANN's attempt at such a policy was drafted in close cooperation with the World Intellectual Property Organization (WIPO), and the result has now become known as the Uniform Dispute Resolution Policy (UDRP). This policy essentially attempts to provide a mechanism for rapid, cheap and reasonable resolution of domain name conflicts, avoiding the traditional court system for disputes by allowing cases to be brought to one of a set of bodies that arbitrate domain name disputes. According to ICANN policy, domain registrants must agree to be bound by the UDRP—they cannot get a domain name without agreeing to this.
Examination of the UDRP decision patterns has caused some to conclude that compulsory domain name arbitration is less likely to give a fair hearing to domain name owners asserting defenses under the First Amendment and other laws, compared to the federal courts of appeal in particular.
In 2013, the initial report of ICANN's Expert Working Group recommended that the present form of Whois, a utility that allows anyone to know who has registered a domain name on the Internet, should be "abandoned". It recommended that Whois be replaced with a system that keeps most registration information secret (or "gated") from most Internet users, and only discloses information for "permissible purposes". ICANN's list of permissible purposes includes domain name research, domain name sale and purchase, regulatory enforcement, personal data protection, legal actions, and abuse mitigation. Whois has been a key tool of investigative journalists interested in determining who was disseminating information on the Internet. The use of Whois by journalists is not included in the list of permissible purposes in the initial report.
Proposals have been made to internationalize ICANN's monitoring responsibilities (currently the responsibility of the US), to transform it into an international organization (under international law), and to "establish an intergovernmental mechanism enabling governments, on an equal footing, to carry out their role and responsibilities in international public policy issues pertaining to the Internet".
One controversial proposal, resulting from a September 2011 summit between India, Brazil, and South Africa (IBSA), would seek to move Internet governance into a "UN Committee on Internet-Related Policy" (UN-CIRP). The action was a reaction to a perception that the principles of the 2005 Tunis Agenda for the Information Society have not been met. The statement proposed the creation of a new political organization operating as a component of the United Nations to provide policy recommendations for the consideration of technical organizations such as ICANN and international bodies such as the ITU. Subsequent to public criticisms, the Indian government backed away from the proposal.
On October 7, 2013, the Montevideo Statement on the Future of Internet Cooperation was released by the managers of a number of organizations involved in coordinating the Internet's global technical infrastructure, loosely known as the "I*" (or "I-star") group. Among other things, the statement "expressed strong concern over the undermining of the trust and confidence of Internet users globally due to recent revelations of pervasive monitoring and surveillance" and "called for accelerating the globalization of ICANN and IANA functions, towards an environment in which all stakeholders, including all governments, participate on an equal footing". This desire to reduce United States association with the internet is considered a reaction to the ongoing NSA surveillance scandal. The statement was signed by the managers of the Internet Corporation for Assigned Names and Numbers (ICANN), the Internet Engineering Task Force, the Internet Architecture Board, the World Wide Web Consortium, the Internet Society, and the five regional Internet address registries (African Network Information Center, American Registry for Internet Numbers, Asia-Pacific Network Information Centre, Latin America and Caribbean Internet Addresses Registry, and Réseaux IP Européens Network Coordination Centre).
During October 2013, Fadi Chehadé, former president and CEO of ICANN, met with Brazilian president Dilma Rousseff in Brasilia. Upon Chehadé's invitation, the two announced that Brazil would host an international summit on Internet governance during April 2014. The announcement came after the 2013 disclosures of mass surveillance by the U.S. government, and Rousseff's speech at the opening session of the 2013 United Nations General Assembly, where she strongly criticized the American surveillance program as a "breach of international law". The "Global Multistakeholder Meeting on the Future of Internet Governance (NET mundial)" was to include representatives of government, industry, civil society, and academia. At the IGF VIII meeting in Bali in October 2013, a commenter noted that Brazil intended the meeting to be a "summit" in the sense that it would be high level with decision-making authority. The organizers of the "NET mundial" meeting decided that an online forum called "/1net", set up by the I* group, would be a major conduit of non-governmental input into the three committees preparing for the meeting in April.
The Obama administration, which had joined critics of ICANN during 2011, announced in March 2014 that it intended to transition away from oversight of the IANA functions contract. The contract that the United States Department of Commerce had with ICANN expired in 2015; in its place, the NTIA would transition oversight of the IANA functions to the 'global multistakeholder community'.
The NetMundial Initiative is a plan for international governance of the Internet that was first proposed at the Global Multistakeholder Meeting on the Future of Internet Governance (GMMFIG) conference (April 23–24, 2014) and later developed into the NetMundial Initiative by ICANN CEO Fadi Chehadé along with representatives of the World Economic Forum (WEF) and the Brazilian Internet Steering Committee (Comitê Gestor da Internet no Brasil), commonly referred to as "CGI.br".
The meeting produced a nonbinding statement in favor of consensus-based decision-making. It represented a compromise and did not harshly condemn mass surveillance or support net neutrality, despite initial endorsement for that from Brazil. The final resolution says ICANN should be controlled internationally by September 2015. A minority of governments, including Russia, China, Iran and India, were unhappy with the final resolution and wanted multilateral management for the Internet, rather than broader multistakeholder management.
A month later, the Panel on Global Internet Cooperation and Governance Mechanisms (convened by the Internet Corporation for Assigned Names and Numbers (ICANN) and the World Economic Forum (WEF) with assistance from The Annenberg Foundation), endorsed and included the NetMundial statement in its own report.
During June 2014, France strongly attacked ICANN, saying ICANN is not a fit venue for Internet governance and that alternatives should be sought.
During 2011, seventy-nine companies, including The Coca-Cola Company, Hewlett-Packard, Samsung and others, signed a petition against ICANN's new TLD program (sometimes referred to as a "commercial landgrab"), in a group organized by the Association of National Advertisers. As of September 2014, this group, the Coalition for Responsible Internet Domain Oversight, that opposes the rollout of ICANN's TLD expansion program, has been joined by 102 associations and 79 major companies. Partly as a response to this criticism, ICANN initiated an effort to protect trademarks in domain name registrations, which eventually culminated in the establishment of the Trademark Clearinghouse.
ICANN has received more than $60 million from gTLD auctions, and has accepted the controversial domain name ".sucks" (referring to the primarily US slang for being inferior or objectionable). The .sucks domains are owned and controlled by the Vox Populi Registry, which won the rights to the .sucks gTLD in November 2014.
The .sucks domain registrar has been described as "predatory, exploitive and coercive" by the Intellectual Property Constituency that advises the ICANN board. When the .sucks registry announced their pricing model, "most brand owners were upset and felt like they were being penalized by having to pay more to protect their brands." Because of the low utility of the ".sucks" domain, most fees come from "Brand Protection" customers registering their trademarks to prevent domains being registered.
Canadian brands had complained that they were being charged "exorbitant" prices to register their trademarks as premium names. FTC chair Edith Ramirez has written to ICANN to say the agency will take action against the .sucks owner if "we have reason to believe an entity has engaged in deceptive or unfair practices in violation of Section 5 of the FTC Act". The Register reported that intellectual property lawyers are infuriated that "the dot-sucks registry was charging trademark holders $2,500 for .sucks domains and everyone else $10."
U.S. Representative Bob Goodlatte has said that trademark holders are "being shaken down" by the registry's fees. Jay Rockefeller says that .sucks is "a predatory shakedown scheme" and "Approving '.sucks', a gTLD with little or no public interest value, will have the effect of undermining the credibility ICANN has slowly been building with skeptical stakeholders."
In a long-running dispute, ICANN has so far declined to allow a Turkish company to purchase the .islam and .halal gTLDs, after the Organisation of Islamic Cooperation objected that the gTLDs should be administered by an organization that represents all the world's 1.6 billion Muslims. After a number of attempts to resolve the issue, the domains remain "on hold".
In April 2019, ICANN proposed an end to the price cap on .org domains and effectively removed it in July, in spite of having received 3,252 opposing comments and only six in favor. A few months later, the operator of the .org registry, the Public Interest Registry, proposed to sell the registry to the investment firm Ethos Capital.
In May 2019, ICANN decided in favor of granting exclusive administration rights for the .amazon gTLD to amazon.com, after a seven-year dispute with the Amazon Cooperation Treaty Organization (ACTO). | [
{
"paragraph_id": 0,
"text": "The Internet Corporation for Assigned Names and Numbers (ICANN /ˈaɪkæn/ EYE-kan) is an American multistakeholder group and nonprofit organization responsible for coordinating the maintenance and procedures of several databases related to the namespaces and numerical spaces of the Internet, ensuring the network's stable and secure operation. ICANN performs the actual technical maintenance work of the Central Internet Address pools and DNS root zone registries pursuant to the Internet Assigned Numbers Authority (IANA) function contract. The contract regarding the IANA stewardship functions between ICANN and the National Telecommunications and Information Administration (NTIA) of the United States Department of Commerce ended on October 1, 2016, formally transitioning the functions to the global multistakeholder community.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Much of its work has concerned the Internet's global Domain Name System (DNS), including policy development for internationalization of the DNS, introduction of new generic top-level domains (TLDs), and the operation of root name servers. The numbering facilities ICANN manages include the Internet Protocol address spaces for IPv4 and IPv6, and assignment of address blocks to regional Internet registries. ICANN also maintains registries of Internet Protocol identifiers.",
"title": ""
},
{
"paragraph_id": 2,
"text": "ICANN's primary principles of operation have been described as helping preserve the operational stability of the Internet; to promote competition; to achieve broad representation of the global Internet community; and to develop policies appropriate to its mission through bottom-up, consensus-based processes. The organization has often included a motto of \"One World. One Internet.\" on annual reports beginning in 2010, on less formal publications, as well as their official website.",
"title": ""
},
{
"paragraph_id": 3,
"text": "ICANN was officially incorporated in the state of California on September 30, 1998. Originally headquartered in Marina del Rey in the same building as the University of Southern California's Information Sciences Institute (ISI), its offices are now in the Playa Vista neighborhood of Los Angeles.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Before the establishment of ICANN, the IANA function of administering registries of Internet protocol identifiers (including the distributing top-level domains and IP addresses) was performed by Jon Postel, a computer science researcher who had been involved in the creation of ARPANET, first at UCLA and then at USC-ISI. In 1997 Postel testified before Congress that this had come about as a \"side task\" to this research work. The Information Sciences Institute was funded by the U.S. Department of Defense, as was SRI International's Network Information Center, which also performed some assigned name functions.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "As the Internet grew and expanded globally, the U.S. Department of Commerce initiated a process to establish a new organization to perform the IANA functions. On January 30, 1998, the National Telecommunications and Information Administration (NTIA), an agency of the U.S. Department of Commerce, issued for comment, \"A Proposal to Improve the Technical Management of Internet Names and Addresses.\" The proposed rule making, or \"Green Paper\", was published in the Federal Register on February 20, 1998, providing opportunity for public comment. NTIA received more than 650 comments as of March 23, 1998, when the comment period closed.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The Green Paper proposed certain actions designed to privatize the management of Internet names and addresses in a manner that allows for the development of competition and facilitates global participation in Internet management. The Green Paper proposed for discussion a variety of issues relating to DNS management including private sector creation of a new not-for-profit corporation (the \"new corporation\") managed by a globally and functionally representative board of directors. ICANN was formed in response to this policy. ICANN managed the Internet Assigned Numbers Authority (IANA) under contract to the United States Department of Commerce (DOC) and pursuant to an agreement with the IETF.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "ICANN was incorporated in California on September 30, 1998, with entrepreneur and philanthropist Esther Dyson as founding chairwoman. It is a nonprofit public benefit corporation \"organized under the California Nonprofit Public Benefit Corporation Law for charitable and public purposes.\" ICANN was established in California due to the presence of Jon Postel, who was a founder of ICANN and was set to be its first Chief Technology Officer prior to his unexpected death. ICANN formerly operated from the same Marina del Rey building where Postel formerly worked, which is home to an office of the Information Sciences Institute at the University of Southern California. However, ICANN's headquarters is now located in the nearby Playa Vista neighborhood of Los Angeles.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Per its original by-laws, primary responsibility for policy formation in ICANN was to be delegated to three supporting organizations (Address Supporting Organization, Domain Name Supporting Organization, and Protocol Supporting Organization), each of which was to develop and recommend substantive policies and procedures for the management of the identifiers within their respective scope. They were also required to be financially independent from ICANN. As expected, the regional Internet registries and the IETF agreed to serve as the Address Supporting Organization and Protocol Supporting Organization respectively, and ICANN issued a call for interested parties to propose the structure and composition of the Domain Name Supporting Organization. In March 1999, the ICANN Board, based in part on the DNSO proposals received, decided instead on an alternate construction for the DNSO which delineated specific constituencies bodies within ICANN itself, thus adding primary responsibility for DNS policy development to ICANN's existing duties of oversight and coordination.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "On July 26, 2006, the United States government renewed the contract with ICANN for performance of the IANA function for an additional one to five years. The context of ICANN's relationship with the U.S. government was clarified on September 29, 2006, when ICANN signed a new memorandum of understanding with the United States Department of Commerce (DOC). This document gave the DOC oversight over some of the ICANN operations.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In July 2008, the DOC reiterated an earlier statement that it has \"no plans to transition management of the authoritative root zone file to ICANN\". The letter also stresses the separate roles of the IANA and VeriSign.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "On September 30, 2009, ICANN signed an agreement with the DOC (known as the \"Affirmation of Commitments\") that confirmed ICANN's commitment to a multistakeholder governance model, but did not remove it from DOC oversight and control. The Affirmation of Commitments, which aimed to create international oversight, ran into criticism.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "On March 10, 2016, ICANN and the DOC signed a historic, culminating agreement to finally remove ICANN and IANA from the control and oversight of the DOC. On October 1, 2016, ICANN was freed from U.S. government oversight.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Since its creation, ICANN has been the subject of criticism and controversy. In 2000, professor Michael Froomkin of the University of Miami School of Law argued that ICANN's relationship with the U.S. Department of Commerce is illegal, in violation of either the Constitution or federal statutes.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "On March 18, 2002, publicly elected At-Large Representative for North America board member Karl Auerbach sued ICANN in Superior Court in California to gain access to ICANN's accounting records without restriction. Auerbach won.",
"title": "Notable events"
},
{
"paragraph_id": 15,
"text": "During September and October 2003, ICANN played a crucial role in the conflict over VeriSign's \"wild card\" DNS service Site Finder. After an open letter from ICANN issuing an ultimatum to VeriSign, later endorsed by the Internet Architecture Board, the company voluntarily ended the service on October 4, 2003. After this action, VeriSign filed a lawsuit against ICANN on February 27, 2004, claiming that ICANN had exceeded its authority. By this lawsuit, VeriSign sought to reduce ambiguity about ICANN's authority. The antitrust component of VeriSign's claim was dismissed during August 2004. VeriSign's challenge that ICANN overstepped its contractual rights is currently outstanding. A proposed settlement already approved by ICANN's board would resolve VeriSign's challenge to ICANN in exchange for the right to increase pricing on .com domains. At the meeting of ICANN in Rome, which took place from March 2 to 6, 2004, ICANN agreed to ask approval of the U.S. Department of Commerce for the Waiting List Service of VeriSign.",
"title": "Notable events"
},
{
"paragraph_id": 16,
"text": "On May 17, 2004, ICANN published a proposed budget for the year 2004–05. It included proposals to increase the openness and professionalism of its operations, and greatly increased its proposed spending from US$8.27 million to $15.83 million. The increase was to be funded by the introduction of new top-level domains, charges to domain registries, and a fee for some domain name registrations, renewals and transfers (initially US$0.20 for all domains within a country-code top-level domain, and US$0.25 for all others). The Council of European National Top Level Domain Registries (CENTR), which represents the Internet registries of 39 countries, rejected the increase, accusing ICANN of a lack of financial prudence and criticizing what it describes as ICANN's \"unrealistic political and operational targets\". Despite the criticism, the registry agreement for the top-level domains jobs and travel includes a US$2 fee on every domain the licensed companies sell or renew.",
"title": "Notable events"
},
{
"paragraph_id": 17,
"text": "After a second round of negotiations during 2004, the TLDs eu, asia, travel, jobs, mobi, and cat were introduced during 2005.",
"title": "Notable events"
},
{
"paragraph_id": 18,
"text": "On February 28, 2006, ICANN's board approved a settlement with VeriSign in the lawsuit resulting from SiteFinder that involved allowing VeriSign (the registry) to raise its registration fees by up to 7% a year. This was criticised by a few members of the U.S. House of Representatives' Small Business Committee.",
"title": "Notable events"
},
{
"paragraph_id": 19,
"text": "During February 2007, ICANN began procedures to end accreditation of one of their registrars, RegisterFly amid charges and lawsuits involving fraud, and criticism of ICANN's management of the situation. ICANN has been the subject of criticism as a result of its handling of RegisterFly, and the harm caused to thousands of clients as a result of what has been termed ICANN's \"laissez faire attitude toward customer allegations of fraud\".",
"title": "Notable events"
},
{
"paragraph_id": 20,
"text": "On May 23, 2008, ICANN issued enforcement notices against ten accredited registrars and announced this through a press release entitled \"'Worst Spam Offenders' Notified by ICANN, Compliance system working to correct Whois and other issues.\" This was largely in response to a report issued by KnujOn, called \"The 10 Worst Registrars\" in terms of spam advertised junk product sites and compliance failure. The mention of the word \"spam\" in the title of the ICANN memo is somewhat misleading since ICANN does not address issues of spam or email abuse. Website content and usage are not within ICANN's mandate. However, the KnujOn report details how various registrars have not complied with their contractual obligations under the Registrar Accreditation Agreement (RAA). The main point of the KnujOn research was to demonstrate the relationships between compliance failure, illicit product traffic, and spam. The report demonstrated that out of 900 ICANN accredited registrars, fewer than 20 held 90% of the web domains advertised in spam. These same registrars were also most frequently cited by KnujOn as failing to resolve complaints made through the Whois Data Problem Reporting System (WDPRS).",
"title": "Notable events"
},
{
"paragraph_id": 21,
"text": "On June 26, 2008, the ICANN Board started a new process of TLD naming policy to take a \"significant step forward on the introduction of new generic top-level domains.\" This program envisioned the availability of many new or already proposed domains, as well a new application and implementation process.",
"title": "Notable events"
},
{
"paragraph_id": 22,
"text": "On October 1, 2008, ICANN issued breach notices against Joker and Beijing Innovative Linkage Technology Ltd. after further researching reports and complaints issued by KnujOn. These notices gave the registrars 15 days to fix their Whois investigation efforts.",
"title": "Notable events"
},
{
"paragraph_id": 23,
"text": "In 2010, ICANN approved a major review of its policies with respect to accountability, transparency, and public participation by the Berkman Center for Internet and Society at Harvard University. This external review was an assistance of the work of ICANN's Accountability and Transparency Review team.",
"title": "Notable events"
},
{
"paragraph_id": 24,
"text": "On February 3, 2011, ICANN announced that it had distributed the last batch of its remaining IPv4 addresses to the world's five regional Internet registries, the organizations that manage IP addresses in different regions. These registries began assigning the final IPv4 addresses within their regions until they ran out completely.",
"title": "Notable events"
},
{
"paragraph_id": 25,
"text": "On June 20, 2011, the ICANN board voted to end most restrictions on the names of generic top-level domains (gTLD). Companies and organizations became able to choose essentially arbitrary top-level Internet domain names. The use of non-Latin characters (such as Cyrillic, Arabic, Chinese, etc.) is also allowed in gTLDs. ICANN began accepting applications for new gTLDS on January 12, 2012. The initial price to apply for a new gTLD was set at $185,000 and the annual renewal fee is $25,000.",
"title": "Notable events"
},
{
"paragraph_id": 26,
"text": "During December 2011, the Federal Trade Commission stated ICANN had long failed to provide safeguards that protect consumers from online swindlers.",
"title": "Notable events"
},
{
"paragraph_id": 27,
"text": "Following the 2013 NSA spying scandal, ICANN endorsed the Montevideo Statement, although no direct connection between these could be proven.",
"title": "Notable events"
},
{
"paragraph_id": 28,
"text": "On October 1, 2016, ICANN ended its contract with the United States Department of Commerce National Telecommunications and Information Administration (NTIA) and entered the private sector.",
"title": "Notable events"
},
{
"paragraph_id": 29,
"text": "The European Union's General Data Protection Regulation (active since May 25, 2018) impacted on ICANN operations, which the latter tried to fix through last-minute changes.",
"title": "Notable events"
},
{
"paragraph_id": 30,
"text": "From its founding to the present, ICANN has been formally organized as a nonprofit corporation \"for charitable and public purposes\" under the California Nonprofit Public Benefit Corporation Law. It is managed by a 16-member board of directors composed of eight members selected by a nominating committee on which all the constituencies of ICANN are represented; six representatives of its Supporting Organizations, sub-groups that deal with specific sections of the policies under ICANN's purview; an at-large seat filled by an at-large organization; and the president / CEO, appointed by the board.",
"title": "Structure"
},
{
"paragraph_id": 31,
"text": "There are currently three supporting organizations: the Generic Names Supporting Organization (GNSO) deals with policy making on generic top-level domains (gTLDs); the Country Code Names Supporting Organization (ccNSO) deals with policy making on country-code top-level domains (ccTLDs); the Address Supporting Organization (ASO) deals with policy making on IP addresses.",
"title": "Structure"
},
{
"paragraph_id": 32,
"text": "ICANN also relies on some advisory committees and other advisory mechanisms to receive advice on the interests and needs of stakeholders that do not directly participate in the Supporting Organizations. These include the Governmental Advisory Committee (GAC), which is composed of representatives of a large number of national governments from all over the world; the At-Large Advisory Committee (ALAC), which is composed of individual Internet users from around the world selected by each of the Regional At-Large Organizations (RALO) and Nominating Committee; the Root Server System Advisory Committee, which provides advice on the operation of the DNS root server system; the Security and Stability Advisory Committee (SSAC), which is composed of Internet experts who study security issues pertaining to ICANN's mandate; and the Technical Liaison Group (TLG), which is composed of representatives of other international technical organizations that focus, at least in part, on the Internet.",
"title": "Structure"
},
{
"paragraph_id": 33,
"text": "The Governmental Advisory Committee has representatives from 179 states and 38 Observer organizations, including the Holy See, Cook Islands, Niue, Taiwan, Hong Kong, Bermuda, Montserrat, the European Commission and the African Union Commission.",
"title": "Structure"
},
{
"paragraph_id": 34,
"text": "In addition the following organizations are GAC Observers:",
"title": "Structure"
},
{
"paragraph_id": 35,
"text": "As the operator of the IANA domain name functions, ICANN is responsible for the DNSSEC management of the root zone. While day-to-day operations are managed by ICANN and Verisign, the trust is rooted in a group of Trusted Community Representatives. The members of this group must not be affiliated with ICANN, but are instead members of the broader DNS community, volunteering to become a Trusted Community Representative. The role of the representatives are primarily to take part in regular key ceremonies at a physical location, organized by ICANN, and to safeguard the key materials in between.",
"title": "Structure"
},
{
"paragraph_id": 36,
"text": "In the Memorandum of understanding that set up the relationship between ICANN and the U.S. government, ICANN was given a mandate requiring that it operate \"in a bottom up, consensus driven, democratic manner.\" However, the attempts that ICANN have made to establish an organizational structure that would allow wide input from the global Internet community did not produce results amenable to the current Board. As a result, the At-Large constituency and direct election of board members by the global Internet community were soon abandoned.",
"title": "Structure"
},
{
"paragraph_id": 37,
"text": "ICANN holds periodic public meetings rotated between continents for the purpose of encouraging global participation in its processes. Resolutions of the ICANN Board, preliminary reports, and minutes of the meetings, are published on the ICANN website, sometimes in real time. However, there are criticisms from ICANN constituencies including the Noncommercial Users Constituency (NCUC) and the At-Large Advisory Committee (ALAC) that there is not enough public disclosure and that too many discussions and decisions take place out of sight of the public.",
"title": "Structure"
},
{
"paragraph_id": 38,
"text": "During the early 2000s, there had been speculation that the United Nations might assume control of ICANN, followed by a negative reaction from the U.S. government and worries about a division of the Internet. The World Summit on the Information Society in Tunisia during November 2005 agreed not to get involved in the day-to-day and technical operations of ICANN. However it also agreed to establish an international Internet Governance Forum, with a consultative role on the future governance of the Internet. ICANN's Government Advisory Committee is currently established to provide advice to ICANN regarding public policy issues and has participation by many of the world's governments.",
"title": "Structure"
},
{
"paragraph_id": 39,
"text": "Some have attempted to argue that ICANN was never given the authority to decide policy, e.g., choose new TLDs or exclude other interested parties who refuse to pay ICANN's US$185,000 fee, but was to be a technical caretaker. Critics suggest that ICANN should not be allowed to impose business rules on market participants, and that all TLDs should be added on a first-come, first-served basis and the market should be the arbiter of who succeeds and who does not.",
"title": "Structure"
},
{
"paragraph_id": 40,
"text": "One task that ICANN was asked to do was to address the issue of domain name ownership resolution for generic top-level domains (gTLDs). ICANN's attempt at such a policy was drafted in close cooperation with the World Intellectual Property Organization (WIPO), and the result has now become known as the Uniform Dispute Resolution Policy (UDRP). This policy essentially attempts to provide a mechanism for rapid, cheap and reasonable resolution of domain name conflicts, avoiding the traditional court system for disputes by allowing cases to be brought to one of a set of bodies that arbitrate domain name disputes. According to ICANN policy, domain registrants must agree to be bound by the UDRP—they cannot get a domain name without agreeing to this.",
"title": "Activities"
},
{
"paragraph_id": 41,
"text": "Examination of the UDRP decision patterns has caused some to conclude that compulsory domain name arbitration is less likely to give a fair hearing to domain name owners asserting defenses under the First Amendment and other laws, compared to the federal courts of appeal in particular.",
"title": "Activities"
},
{
"paragraph_id": 42,
"text": "In 2013, the initial report of ICANN's Expert Working Group has recommended that the present form of Whois, a utility that allows anyone to know who has registered a domain name on the Internet, should be \"abandoned\". It recommends it be replaced with a system that keeps most registration information secret (or \"gated\") from most Internet users, and only discloses information for \"permissible purposes\". ICANN's list of permissible purposes includes domain name research, domain name sale and purchase, regulatory enforcement, personal data protection, legal actions, and abuse mitigation. Whois has been a key tool of investigative journalists interested in determining who was disseminating information on the Internet. The use of whois by journalists is not included in the list of permissible purposes in the initial report.",
"title": "Activities"
},
{
"paragraph_id": 43,
"text": "Proposals have been made to internationalize ICANN's monitoring responsibilities (currently the responsibility of the US), to transform it into an international organization (under international law), and to \"establish an intergovernmental mechanism enabling governments, on an equal footing, to carry out their role and responsibilities in international public policy issues pertaining to the Internet\".",
"title": "Proposals for reform"
},
{
"paragraph_id": 44,
"text": "One controversial proposal, resulting from a September 2011 summit between India, Brazil, and South Africa (IBSA), would seek to move Internet governance into a \"UN Committee on Internet-Related Policy\" (UN-CIRP). The action was a reaction to a perception that the principles of the 2005 Tunis Agenda for the Information Society have not been met. The statement proposed the creation of a new political organization operating as a component of the United Nations to provide policy recommendations for the consideration of technical organizations such as ICANN and international bodies such as the ITU. Subsequent to public criticisms, the Indian government backed away from the proposal.",
"title": "Proposals for reform"
},
{
"paragraph_id": 45,
"text": "On October 7, 2013, the Montevideo Statement on the Future of Internet Cooperation was released by the managers of a number of organizations involved in coordinating the Internet's global technical infrastructure, loosely known as the \"I*\" (or \"I-star\") group. Among other things, the statement \"expressed strong concern over the undermining of the trust and confidence of Internet users globally due to recent revelations of pervasive monitoring and surveillance\" and \"called for accelerating the globalization of ICANN and IANA functions, towards an environment in which all stakeholders, including all governments, participate on an equal footing\". This desire to reduce United States association with the internet is considered a reaction to the ongoing NSA surveillance scandal. The statement was signed by the managers of the Internet Corporation for Assigned Names and Numbers (ICANN), the Internet Engineering Task Force, the Internet Architecture Board, the World Wide Web Consortium, the Internet Society, and the five regional Internet address registries (African Network Information Center, American Registry for Internet Numbers, Asia-Pacific Network Information Centre, Latin America and Caribbean Internet Addresses Registry, and Réseaux IP Européens Network Coordination Centre).",
"title": "Proposals for reform"
},
{
"paragraph_id": 46,
"text": "During October 2013, Fadi Chehadé, former president and CEO of ICANN, met with Brazilian president Dilma Rousseff in Brasilia. Upon Chehadé's invitation, the two announced that Brazil would host an international summit on Internet governance during April 2014. The announcement came after the 2013 disclosures of mass surveillance by the U.S. government, and Rousseff's speech at the opening session of the 2013 United Nations General Assembly, where she strongly criticized the American surveillance program as a \"breach of international law\". The \"Global Multistakeholder Meeting on the Future of Internet Governance (NET mundial)\" will include representatives of government, industry, civil society, and academia. At the IGF VIII meeting in Bali in October 2013 a commenter noted that Brazil intends the meeting to be a \"summit\" in the sense that it will be high level with decision-making authority. The organizers of the \"NET mundial\" meeting have decided that an online forum called \"/1net\", set up by the I* group, will be a major conduit of non-governmental input into the three committees preparing for the meeting in April.",
"title": "Proposals for reform"
},
{
"paragraph_id": 47,
"text": "The Obama administration that had joined critics of ICANN during 2011 announced in March 2014 that they intended to transition away from oversight of the IANA functions contract. The current contract that the United States Department of Commerce has with ICANN expired in 2015, in its place the NTIA will transition oversight of the IANA functions to the 'global multistakeholder community'.",
"title": "Proposals for reform"
},
{
"paragraph_id": 48,
"text": "The NetMundial Initiative is a plan for international governance of the Internet that was first proposed at the Global Multistakeholder Meeting on the Future of Internet Governance (GMMFIG) conference (April 23–24, 2014) and later developed into the NetMundial Initiative by ICANN CEO Fadi Chehadé along with representatives of the World Economic Forum (WEF) and the Brazilian Internet Steering Committee (Comitê Gestor da Internet no Brasil), commonly referred to as \"CGI.br\".",
"title": "Proposals for reform"
},
{
"paragraph_id": 49,
"text": "The meeting produced a nonbinding statement in favor of consensus-based decision-making. It represented a compromise and did not harshly condemn mass surveillance or support net neutrality, despite initial endorsement for that from Brazil. The final resolution says ICANN should be controlled internationally by September 2015. A minority of governments, including Russia, China, Iran and India, were unhappy with the final resolution and wanted multilateral management for the Internet, rather than broader multistakeholder management.",
"title": "Proposals for reform"
},
{
"paragraph_id": 50,
"text": "A month later, the Panel on Global Internet Cooperation and Governance Mechanisms (convened by the Internet Corporation for Assigned Names and Numbers (ICANN) and the World Economic Forum (WEF) with assistance from The Annenberg Foundation), endorsed and included the NetMundial statement in its own report.",
"title": "Proposals for reform"
},
{
"paragraph_id": 51,
"text": "During June 2014, France strongly attacked ICANN, saying ICANN is not a fit venue for Internet governance and that alternatives should be sought.",
"title": "Proposals for reform"
},
{
"paragraph_id": 52,
"text": "During 2011, seventy-nine companies, including The Coca-Cola Company, Hewlett-Packard, Samsung and others, signed a petition against ICANN's new TLD program (sometimes referred to as a \"commercial landgrab\"), in a group organized by the Association of National Advertisers. As of September 2014, this group, the Coalition for Responsible Internet Domain Oversight, that opposes the rollout of ICANN's TLD expansion program, has been joined by 102 associations and 79 major companies. Partly as a response to this criticism, ICANN initiated an effort to protect trademarks in domain name registrations, which eventually culminated in the establishment of the Trademark Clearinghouse.",
"title": "TLD expansion and concerns about specific top-level domains"
},
{
"paragraph_id": 53,
"text": "ICANN has received more than $60 million from gTLD auctions, and has accepted the controversial domain name \".sucks\" (referring to the primarily US slang for being inferior or objectionable). sucks domains are owned and controlled by the Vox Populi Registry which won the rights for .sucks gTLD in November 2014.",
"title": "TLD expansion and concerns about specific top-level domains"
},
{
"paragraph_id": 54,
"text": "The .sucks domain registrar has been described as \"predatory, exploitive and coercive\" by the Intellectual Property Constituency that advises the ICANN board. When the .sucks registry announced their pricing model, \"most brand owners were upset and felt like they were being penalized by having to pay more to protect their brands.\" Because of the low utility of the \".sucks\" domain, most fees come from \"Brand Protection\" customers registering their trademarks to prevent domains being registered.",
"title": "TLD expansion and concerns about specific top-level domains"
},
{
"paragraph_id": 55,
"text": "Canadian brands had complained that they were being charged \"exorbitant\" prices to register their trademarks as premium names. FTC chair Edith Ramirez has written to ICANN to say the agency will take action against the .sucks owner if \"we have reason to believe an entity has engaged in deceptive or unfair practices in violation of Section 5 of the FTC Act\". The Register reported that intellectual property lawyers are infuriated that \"the dot-sucks registry was charging trademark holders $2,500 for .sucks domains and everyone else $10.\"",
"title": "TLD expansion and concerns about specific top-level domains"
},
{
"paragraph_id": 56,
"text": "U.S. Representative Bob Goodlatte has said that trademark holders are \"being shaken down\" by the registry's fees. Jay Rockefeller says that .sucks is \"a predatory shakedown scheme\" and \"Approving '.sucks', a gTLD with little or no public interest value, will have the effect of undermining the credibility ICANN has slowly been building with skeptical stakeholders.\"",
"title": "TLD expansion and concerns about specific top-level domains"
},
{
"paragraph_id": 57,
"text": "In a long-running dispute, ICANN has so far declined to allow a Turkish company to purchase the .islam and .halal gTLDs, after the Organisation of Islamic Cooperation objected that the gTLDs should be administered by an organization that represents all the world's 1.6 billion Muslims. After a number of attempts to resolve the issue the domains are still held \"on hold\".",
"title": "TLD expansion and concerns about specific top-level domains"
},
{
"paragraph_id": 58,
"text": "In April 2019, ICANN proposed an end to the price cap of org domains and effectively removed it in July in spite of having received 3,252 opposing comments and only six in favor. A few months later, the owner of the domain, the Public Interest Registry, proposed to sell the domain to investment firm Ethos Capital.",
"title": "TLD expansion and concerns about specific top-level domains"
},
{
"paragraph_id": 59,
"text": "In May 2019, ICANN decided in favor of granting exclusive administration rights to amazon.com for the .amazon gTLD after a 7 year long dispute with the Amazon Cooperation Treaty Organization (ACTO).",
"title": "TLD expansion and concerns about specific top-level domains"
}
]
| The Internet Corporation for Assigned Names and Numbers is an American multistakeholder group and nonprofit organization responsible for coordinating the maintenance and procedures of several databases related to the namespaces and numerical spaces of the Internet, ensuring the network's stable and secure operation. ICANN performs the actual technical maintenance work of the Central Internet Address pools and DNS root zone registries pursuant to the Internet Assigned Numbers Authority (IANA) function contract. The contract regarding the IANA stewardship functions between ICANN and the National Telecommunications and Information Administration (NTIA) of the United States Department of Commerce ended on October 1, 2016, formally transitioning the functions to the global multistakeholder community. Much of its work has concerned the Internet's global Domain Name System (DNS), including policy development for internationalization of the DNS, introduction of new generic top-level domains (TLDs), and the operation of root name servers. The numbering facilities ICANN manages include the Internet Protocol address spaces for IPv4 and IPv6, and assignment of address blocks to regional Internet registries. ICANN also maintains registries of Internet Protocol identifiers. ICANN's primary principles of operation have been described as helping preserve the operational stability of the Internet; to promote competition; to achieve broad representation of the global Internet community; and to develop policies appropriate to its mission through bottom-up, consensus-based processes. The organization has often included a motto of "One World. One Internet." on annual reports beginning in 2010, on less formal publications, as well as their official website. ICANN was officially incorporated in the state of California on September 30, 1998. Originally headquartered in Marina del Rey in the same building as the University of Southern California's Information Sciences Institute (ISI), its offices are now in the Playa Vista neighborhood of Los Angeles. | 2001-09-22T07:45:59Z | 2023-11-01T09:52:30Z | [
"Template:Infobox organization",
"Template:Clarify",
"Template:Cite ietf",
"Template:Cite magazine",
"Template:Distinguish",
"Template:Cn",
"Template:Commons category",
"Template:Use American English",
"Template:Cite web",
"Template:Internet",
"Template:Who",
"Template:Reflist",
"Template:Cite thesis",
"Template:Respell",
"Template:Div col",
"Template:Div col end",
"Template:Cite journal",
"Template:Citation needed",
"Template:Cbignore",
"Template:ISBN",
"Template:Official website",
"Template:Authority control",
"Template:IPAc-en",
"Template:Use mdy dates",
"Template:Cite book",
"Template:Cite news",
"Template:Short description",
"Template:Webarchive",
"Template:Clear",
"Template:ICANN structure",
"Template:Mono"
]
| https://en.wikipedia.org/wiki/ICANN |
15,237 | Iterative method | In computational mathematics, an iterative method is a mathematical procedure that uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which the n-th approximation is derived from the previous ones.
A specific implementation, with termination criteria, of a given iterative method such as gradient descent, hill climbing, Newton's method, or a quasi-Newton method like BFGS is an algorithm of the iterative method. An iterative method is called convergent if the corresponding sequence converges for given initial approximations. A mathematically rigorous convergence analysis of an iterative method is usually performed; however, heuristic-based iterative methods are also common.
In contrast, direct methods attempt to solve the problem by a finite sequence of operations. In the absence of rounding errors, direct methods would deliver an exact solution (for example, solving a linear system of equations A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } by Gaussian elimination). Iterative methods are often the only choice for nonlinear equations. However, iterative methods are often useful even for linear problems involving many variables (sometimes on the order of millions), where direct methods would be prohibitively expensive (and in some cases impossible) even with the best available computing power.
If an equation can be put into the form f(x) = x, and a solution x is an attractive fixed point of the function f, then one may begin with a point x_1 in the basin of attraction of x, and let x_{n+1} = f(x_n) for n ≥ 1, and the sequence {x_n}_{n ≥ 1} will converge to the solution x. Here x_n is the nth approximation or iteration of x and x_{n+1} is the next or (n + 1)th iteration of x. Alternately, superscripts in parentheses are often used in numerical methods, so as not to interfere with subscripts with other meanings. (For example, x^(n+1) = f(x^(n)).) If the function f is continuously differentiable, a sufficient condition for convergence is that the spectral radius of the derivative is strictly bounded by one in a neighborhood of the fixed point. If this condition holds at the fixed point, then a sufficiently small neighborhood (basin of attraction) must exist.
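A minimal Python sketch of the fixed-point iteration described above (the choice of g(x) = cos(x) and the starting point 1.0 are arbitrary illustrative assumptions, not taken from the article):

```python
import math

def fixed_point_iteration(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree to tol."""
    x = x0
    for n in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next, n + 1
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge")

# Example: solve cos(x) = x. Near the fixed point |d/dx cos(x)| < 1,
# so the iteration converges for a nearby starting value such as 1.0.
root, n_iter = fixed_point_iteration(math.cos, 1.0)
print(root, n_iter)   # root ≈ 0.739085...
```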
In the case of a system of linear equations, the two main classes of iterative methods are the stationary iterative methods, and the more general Krylov subspace methods.
Stationary iterative methods solve a linear system with an operator approximating the original one, and, based on a measurement of the error in the result (the residual), form a "correction equation" for which this process is repeated. While these methods are simple to derive, implement, and analyze, convergence is only guaranteed for a limited class of matrices.
An iterative method is defined by x^(k+1) := Ψ(x^(k)), k ≥ 0 {\displaystyle \mathbf {x} ^{k+1}:=\Psi (\mathbf {x} ^{k}),\quad k\geq 0}
and for a given linear system A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } with exact solution x ∗ {\displaystyle \mathbf {x} ^{*}} the error by e^(k) := x^(k) − x^(∗), k ≥ 0 {\displaystyle \mathbf {e} ^{k}:=\mathbf {x} ^{k}-\mathbf {x} ^{*},\quad k\geq 0} .
An iterative method is called linear if there exists a matrix C ∈ R n × n {\displaystyle C\in \mathbb {R} ^{n\times n}} such that e^(k+1) = C e^(k) for all k ≥ 0 {\displaystyle \mathbf {e} ^{k+1}=C\mathbf {e} ^{k}\quad \forall k\geq 0}
and this matrix is called the iteration matrix. An iterative method with a given iteration matrix C {\displaystyle C} is called convergent if the following holds: lim_{k→∞} C^k = 0 {\displaystyle \lim _{k\to \infty }C^{k}=0} .
An important theorem states that for a given iterative method and its iteration matrix C {\displaystyle C} it is convergent if and only if its spectral radius ρ ( C ) {\displaystyle \rho (C)} is smaller than unity, that is, ρ(C) < 1 {\displaystyle \rho (C)<1} .
The basic iterative methods work by splitting the matrix A {\displaystyle A} into A = M − N {\displaystyle A=M-N}
and here the matrix M {\displaystyle M} should be easily invertible. The iterative methods are now defined as M x^(k+1) = N x^(k) + b, k ≥ 0 {\displaystyle M\mathbf {x} ^{k+1}=N\mathbf {x} ^{k}+\mathbf {b} ,\quad k\geq 0} .
From this it follows that the iteration matrix is given by C = I − M^(−1) A = M^(−1) N {\displaystyle C=I-M^{-1}A=M^{-1}N} .
Basic examples of stationary iterative methods use a splitting of the matrix A {\displaystyle A} such as A = D + L + U {\displaystyle A=D+L+U}
where D {\displaystyle D} is only the diagonal part of A {\displaystyle A} , and L {\displaystyle L} is the strict lower triangular part of A {\displaystyle A} . Respectively, U {\displaystyle U} is the strict upper triangular part of A {\displaystyle A} .
Linear stationary iterative methods are also called relaxation methods.
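As a concrete illustration of such a splitting, the following NumPy sketch implements the Jacobi method, one standard choice that takes M = D, and checks the convergence criterion ρ(C) < 1 on the iteration matrix. The test matrix and right-hand side are made-up example data (a strictly diagonally dominant matrix, for which Jacobi is known to converge); this is a sketch, not a library implementation.

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Solve A x = b with the Jacobi splitting A = D + (L + U), i.e. M = D."""
    D = np.diag(np.diag(A))          # diagonal part of A
    R = A - D                        # L + U (off-diagonal part)
    C = -np.linalg.solve(D, R)       # iteration matrix C = -D^{-1}(L + U)
    rho = max(abs(np.linalg.eigvals(C)))
    print(f"spectral radius of iteration matrix: {rho:.3f}")

    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
    for k in range(max_iter):
        x_new = np.linalg.solve(D, b - R @ x)   # M x^(k+1) = N x^(k) + b
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new, k + 1
        x = x_new
    raise RuntimeError("Jacobi iteration did not converge")

# Strictly diagonally dominant example matrix, so rho < 1 and Jacobi converges.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])
x, iters = jacobi(A, b)
print(x, iters)
```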
Krylov subspace methods work by forming a basis of the sequence of successive matrix powers times the initial residual (the Krylov sequence). The approximations to the solution are then formed by minimizing the residual over the subspace formed. The prototypical method in this class is the conjugate gradient method (CG) which assumes that the system matrix A {\displaystyle A} is symmetric positive-definite. For symmetric (and possibly indefinite) A {\displaystyle A} one works with the minimal residual method (MINRES). In the case of non-symmetric matrices, methods such as the generalized minimal residual method (GMRES) and the biconjugate gradient method (BiCG) have been derived.
Since these methods form a basis, it is evident that the method converges in N iterations, where N is the system size. However, in the presence of rounding errors this statement does not hold; moreover, in practice N can be very large, and the iterative process often reaches sufficient accuracy much earlier. The analysis of these methods is hard, as convergence depends in a complicated way on the spectrum of the operator.
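To illustrate, here is a plain NumPy sketch of the (unpreconditioned) conjugate gradient method. The symmetric positive-definite test matrix is synthetic, and the tolerance is an arbitrary choice; in exact arithmetic the loop would terminate after at most N steps, but it is stopped as soon as the residual norm drops below the tolerance, which for this well-conditioned example happens much earlier.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-8, max_iter=None):
    """Solve A x = b for symmetric positive-definite A by the CG method."""
    n = len(b)
    max_iter = n if max_iter is None else max_iter
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for k in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, k + 1
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x, max_iter

# Symmetric positive-definite test problem (B^T B + 50 I is SPD).
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B.T @ B + 50 * np.eye(50)
x_true = rng.standard_normal(50)
b = A @ x_true
x, iters = conjugate_gradient(A, b)
print(iters, np.linalg.norm(x - x_true))
```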
The approximating operator that appears in stationary iterative methods can also be incorporated in Krylov subspace methods such as GMRES (alternatively, preconditioned Krylov methods can be considered as accelerations of stationary iterative methods), where they become transformations of the original operator to a presumably better conditioned one. The construction of preconditioners is a large research area.
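As a small illustration of why preconditioning helps, the sketch below applies Jacobi (diagonal) preconditioning as a symmetric scaling, which can drastically reduce the condition number that governs Krylov convergence. The deliberately badly scaled SPD matrix is synthetic example data, and this is only one simple preconditioner among many.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
B = rng.standard_normal((n, n))
core = B.T @ B + n * np.eye(n)             # moderately conditioned SPD matrix
scales = 10.0 ** rng.uniform(-3, 3, n)     # badly scaled rows and columns
S = np.diag(scales)
A = S @ core @ S                           # ill-conditioned SPD test matrix

d_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(A)))   # Jacobi preconditioner D^{-1/2}
A_prec = d_inv_sqrt @ A @ d_inv_sqrt               # symmetrically preconditioned operator

print("condition number before:", np.linalg.cond(A))
print("condition number after :", np.linalg.cond(A_prec))
```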
Jamshīd al-Kāshī used iterative methods to calculate the sine of 1° and π in The Treatise of Chord and Sine to high precision. An early iterative method for solving a linear system appeared in a letter of Gauss to a student of his. He proposed solving a 4-by-4 system of equations by repeatedly solving for the component in which the residual was the largest.
The theory of stationary iterative methods was solidly established with the work of D.M. Young starting in the 1950s. The conjugate gradient method was also invented in the 1950s, with independent developments by Cornelius Lanczos, Magnus Hestenes and Eduard Stiefel, but its nature and applicability were misunderstood at the time. Only in the 1970s was it realized that conjugacy-based methods work very well for partial differential equations, especially of the elliptic type. | [
{
"paragraph_id": 0,
"text": "In computational mathematics, an iterative method is a mathematical procedure that uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which the n-th approximation is derived from the previous ones.",
"title": ""
},
{
"paragraph_id": 1,
"text": "A specific implementation with termination criteria for a given iterative method like gradient descent, hill climbing, Newton's method, or quasi-Newton methods like BFGS, is an algorithm of the iterative method. An iterative method is called convergent if the corresponding sequence converges for given initial approximations. A mathematically rigorous convergence analysis of an iterative method is usually performed; however, heuristic-based iterative methods are also common.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In contrast, direct methods attempt to solve the problem by a finite sequence of operations. In the absence of rounding errors, direct methods would deliver an exact solution (for example, solving a linear system of equations A x = b {\\displaystyle A\\mathbf {x} =\\mathbf {b} } by Gaussian elimination). Iterative methods are often the only choice for nonlinear equations. However, iterative methods are often useful even for linear problems involving many variables (sometimes on the order of millions), where direct methods would be prohibitively expensive (and in some cases impossible) even with the best available computing power.",
"title": ""
},
{
"paragraph_id": 3,
"text": "If an equation can be put into the form f(x) = x, and a solution x is an attractive fixed point of the function f, then one may begin with a point x1 in the basin of attraction of x, and let xn+1 = f(xn) for n ≥ 1, and the sequence {xn}n ≥ 1 will converge to the solution x. Here xn is the nth approximation or iteration of x and xn+1 is the next or n + 1 iteration of x. Alternately, superscripts in parentheses are often used in numerical methods, so as not to interfere with subscripts with other meanings. (For example, x = f(x).) If the function f is continuously differentiable, a sufficient condition for convergence is that the spectral radius of the derivative is strictly bounded by one in a neighborhood of the fixed point. If this condition holds at the fixed point, then a sufficiently small neighborhood (basin of attraction) must exist.",
"title": "Attractive fixed points"
},
{
"paragraph_id": 4,
"text": "In the case of a system of linear equations, the two main classes of iterative methods are the stationary iterative methods, and the more general Krylov subspace methods.",
"title": "Linear systems"
},
{
"paragraph_id": 5,
"text": "Stationary iterative methods solve a linear system with an operator approximating the original one; and based on a measurement of the error in the result (the residual), form a \"correction equation\" for which this process is repeated. While these methods are simple to derive, implement, and analyze, convergence is only guaranteed for a limited class of matrices.",
"title": "Linear systems"
},
{
"paragraph_id": 6,
"text": "An iterative method is defined by",
"title": "Linear systems"
},
{
"paragraph_id": 7,
"text": "and for a given linear system A x = b {\\displaystyle A\\mathbf {x} =\\mathbf {b} } with exact solution x ∗ {\\displaystyle \\mathbf {x} ^{*}} the error by",
"title": "Linear systems"
},
{
"paragraph_id": 8,
"text": "An iterative method is called linear if there exists a matrix C ∈ R n × n {\\displaystyle C\\in \\mathbb {R} ^{n\\times n}} such that",
"title": "Linear systems"
},
{
"paragraph_id": 9,
"text": "and this matrix is called the iteration matrix. An iterative method with a given iteration matrix C {\\displaystyle C} is called convergent if the following holds",
"title": "Linear systems"
},
{
"paragraph_id": 10,
"text": "An important theorem states that for a given iterative method and its iteration matrix C {\\displaystyle C} it is convergent if and only if its spectral radius ρ ( C ) {\\displaystyle \\rho (C)} is smaller than unity, that is,",
"title": "Linear systems"
},
{
"paragraph_id": 11,
"text": "The basic iterative methods work by splitting the matrix A {\\displaystyle A} into",
"title": "Linear systems"
},
{
"paragraph_id": 12,
"text": "and here the matrix M {\\displaystyle M} should be easily invertible. The iterative methods are now defined as",
"title": "Linear systems"
},
{
"paragraph_id": 13,
"text": "From this follows that the iteration matrix is given by",
"title": "Linear systems"
},
{
"paragraph_id": 14,
"text": "Basic examples of stationary iterative methods use a splitting of the matrix A {\\displaystyle A} such as",
"title": "Linear systems"
},
{
"paragraph_id": 15,
"text": "where D {\\displaystyle D} is only the diagonal part of A {\\displaystyle A} , and L {\\displaystyle L} is the strict lower triangular part of A {\\displaystyle A} . Respectively, U {\\displaystyle U} is the strict upper triangular part of A {\\displaystyle A} .",
"title": "Linear systems"
},
{
"paragraph_id": 16,
"text": "Linear stationary iterative methods are also called relaxation methods.",
"title": "Linear systems"
},
{
"paragraph_id": 17,
"text": "Krylov subspace methods work by forming a basis of the sequence of successive matrix powers times the initial residual (the Krylov sequence). The approximations to the solution are then formed by minimizing the residual over the subspace formed. The prototypical method in this class is the conjugate gradient method (CG) which assumes that the system matrix A {\\displaystyle A} is symmetric positive-definite. For symmetric (and possibly indefinite) A {\\displaystyle A} one works with the minimal residual method (MINRES). In the case of non-symmetric matrices, methods such as the generalized minimal residual method (GMRES) and the biconjugate gradient method (BiCG) have been derived.",
"title": "Linear systems"
},
{
"paragraph_id": 18,
"text": "Since these methods form a basis, it is evident that the method converges in N iterations, where N is the system size. However, in the presence of rounding errors this statement does not hold; moreover, in practice N can be very large, and the iterative process reaches sufficient accuracy already far earlier. The analysis of these methods is hard, depending on a complicated function of the spectrum of the operator.",
"title": "Linear systems"
},
{
"paragraph_id": 19,
"text": "The approximating operator that appears in stationary iterative methods can also be incorporated in Krylov subspace methods such as GMRES (alternatively, preconditioned Krylov methods can be considered as accelerations of stationary iterative methods), where they become transformations of the original operator to a presumably better conditioned one. The construction of preconditioners is a large research area.",
"title": "Linear systems"
},
{
"paragraph_id": 20,
"text": "Jamshīd al-Kāshī used iterative methods to calculate the sine of 1° and π in The Treatise of Chord and Sine to high precision. An early iterative method for solving a linear system appeared in a letter of Gauss to a student of his. He proposed solving a 4-by-4 system of equations by repeatedly solving the component in which the residual was the largest .",
"title": "Linear systems"
},
{
"paragraph_id": 21,
"text": "The theory of stationary iterative methods was solidly established with the work of D.M. Young starting in the 1950s. The conjugate gradient method was also invented in the 1950s, with independent developments by Cornelius Lanczos, Magnus Hestenes and Eduard Stiefel, but its nature and applicability were misunderstood at the time. Only in the 1970s was it realized that conjugacy based methods work very well for partial differential equations, especially the elliptic type.",
"title": "Linear systems"
}
]
| In computational mathematics, an iterative method is a mathematical procedure that uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which the n-th approximation is derived from the previous ones. A specific implementation with termination criteria for a given iterative method like gradient descent, hill climbing, Newton's method, or quasi-Newton methods like BFGS, is an algorithm of the iterative method. An iterative method is called convergent if the corresponding sequence converges for given initial approximations. A mathematically rigorous convergence analysis of an iterative method is usually performed; however, heuristic-based iterative methods are also common. In contrast, direct methods attempt to solve the problem by a finite sequence of operations. In the absence of rounding errors, direct methods would deliver an exact solution. Iterative methods are often the only choice for nonlinear equations. However, iterative methods are often useful even for linear problems involving many variables, where direct methods would be prohibitively expensive even with the best available computing power. | 2001-11-11T19:00:32Z | 2023-12-21T02:56:58Z | [
"Template:Short description",
"Template:Citation needed",
"Template:Portal",
"Template:Authority control",
"Template:Pi",
"Template:Reflist",
"Template:Cite journal",
"Template:Commons category",
"Template:Optimization algorithms"
]
| https://en.wikipedia.org/wiki/Iterative_method |
15,238 | International judicial institution | International judicial institutions can be divided into courts, arbitral tribunals and quasi-judicial institutions. Courts are permanent bodies, with nearly the same composition for each case. Arbitral tribunals, by contrast, are constituted anew for each case. Both courts and arbitral tribunals can make binding decisions. Quasi-judicial institutions, by contrast, make rulings on cases, but these rulings are not in themselves legally binding; the main example is the individual complaints mechanisms available under the various UN human rights treaties.
Institutions can also be divided into global and regional institutions.
The listing below incorporates currently existing institutions, defunct institutions that no longer exist, institutions which never came into existence due to non-ratification of their constitutive instruments, and institutions which do not yet exist but for which constitutive instruments have been signed. It does not include merely proposed institutions for which no instrument was ever signed. | [
{
"paragraph_id": 0,
"text": "International judicial institutions can be divided into courts, arbitral tribunals and quasi-judicial institutions. Courts are permanent bodies, with near the same composition for each case. Arbitral tribunals, by contrast, are constituted anew for each case. Both courts and arbitral tribunals can make binding decisions. Quasi-judicial institutions, by contrast, make rulings on cases, but these rulings are not in themselves legally binding; the main example is the individual complaints mechanisms available under the various UN human rights treaties.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Institutions can also be divided into global and regional institutions.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The listing below incorporates both currently existing institutions, defunct institutions that no longer exist, institutions which never came into existence due to non-ratification of their constitutive instruments, and institutions which do not yet exist, but for which constitutive instruments have been signed. It does not include mere proposed institutions for which no instrument was ever signed.",
"title": ""
}
]
| International judicial institutions can be divided into courts, arbitral tribunals and quasi-judicial institutions. Courts are permanent bodies, with near the same composition for each case. Arbitral tribunals, by contrast, are constituted anew for each case. Both courts and arbitral tribunals can make binding decisions. Quasi-judicial institutions, by contrast, make rulings on cases, but these rulings are not in themselves legally binding; the main example is the individual complaints mechanisms available under the various UN human rights treaties. Institutions can also be divided into global and regional institutions. The listing below incorporates both currently existing institutions, defunct institutions that no longer exist, institutions which never came into existence due to non-ratification of their constitutive instruments, and institutions which do not yet exist, but for which constitutive instruments have been signed. It does not include mere proposed institutions for which no instrument was ever signed. | 2022-09-30T18:19:09Z | [
"Template:Cite book",
"Template:Short description",
"Template:More citations",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/International_judicial_institution |
|
15,239 | International Prize Court | The International Prize Court was an international court proposed at the beginning of the 20th century, to hear prize cases. An international agreement to create it, the Convention Relative to the Creation of an International Prize Court, was made at the Second Hague Conference in 1907 but never came into force.
The capturing of prizes (enemy equipment, vehicles, and especially ships) during wartime is a tradition that goes back as far as organized warfare itself. The International Prize Court was to hear appeals from national courts concerning prize cases. Even as a draft, the convention was innovative for the time, in being both the first ever treaty for a truly international court (as opposed to a mere arbitral tribunal), and in providing individuals with access to the court, going against the prevailing doctrines of international law at the time, according to which only states had rights and duties under international law. The convention was opposed, particularly by elements within the United States and the United Kingdom, as a violation of national sovereignty.
The 1907 convention was modified by the Additional Protocol to the Convention Relative to the Creation of an International Prize Court, done at The Hague on October 18, 1910. The protocol was an attempt to resolve concerns expressed by the United States about the court, which it felt violated the constitutional provision that makes the U.S. Supreme Court the final judicial authority. However, neither the convention nor the subsequent protocol ever entered into force, since only Nicaragua ratified the agreements. As a result, the court never came into existence.
A number of ideas from the International Prize Court proposal can be seen in present-day international courts, such as its provision for judges ad hoc, later adopted in the Permanent Court of International Justice and the subsequent International Court of Justice.
Primary:
Secondary: | [
{
"paragraph_id": 0,
"text": "The International Prize Court was an international court proposed at the beginning of the 20th century, to hear prize cases. An international agreement to create it, the Convention Relative to the Creation of an International Prize Court, was made at the Second Hague Conference in 1907 but never came into force.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The capturing of prizes (enemy equipment, vehicles, and especially ships) during wartime is a tradition that goes back as far as organized warfare itself. The International Prize Court was to hear appeals from national courts concerning prize cases. Even as a draft, the convention was innovative for the time, in being both the first ever treaty for a truly international court (as opposed to a mere arbitral tribunal), and in providing individuals with access to the court, going against the prevailing doctrines of international law at the time, according to which only states had rights and duties under international law. The convention was opposed, particularly by elements within the United States and the United Kingdom, as a violation of national sovereignty.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The 1907 convention was modified by the Additional Protocol to the Convention Relative to the Creation of an International Prize Court, done at the Hague on October 18, 1910. The protocol was an attempt to resolve some concerns expressed by the United States at the court, which felt it to be in violation of its constitutional provision that provides for the U.S. Supreme Court being the final judicial authority. However, neither the convention nor the subsequent protocol ever entered into force, since only Nicaragua ratified the agreements. As a result, the court never came into existence.",
"title": ""
},
{
"paragraph_id": 3,
"text": "A number of ideas from the International Prize Court proposal can be seen in present-day international courts, such as its provision for judges ad hoc, later adopted in the Permanent Court of International Justice and the subsequent International Court of Justice.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Primary:",
"title": "References"
},
{
"paragraph_id": 5,
"text": "Secondary:",
"title": "References"
}
]
| The International Prize Court was an international court proposed at the beginning of the 20th century, to hear prize cases. An international agreement to create it, the Convention Relative to the Creation of an International Prize Court, was made at the Second Hague Conference in 1907 but never came into force. The capturing of prizes during wartime is a tradition that goes back as far as organized warfare itself. The International Prize Court was to hear appeals from national courts concerning prize cases. Even as a draft, the convention was innovative for the time, in being both the first ever treaty for a truly international court, and in providing individuals with access to the court, going against the prevailing doctrines of international law at the time, according to which only states had rights and duties under international law. The convention was opposed, particularly by elements within the United States and the United Kingdom, as a violation of national sovereignty. The 1907 convention was modified by the Additional Protocol to the Convention Relative to the Creation of an International Prize Court, done at the Hague on October 18, 1910. The protocol was an attempt to resolve some concerns expressed by the United States at the court, which felt it to be in violation of its constitutional provision that provides for the U.S. Supreme Court being the final judicial authority. However, neither the convention nor the subsequent protocol ever entered into force, since only Nicaragua ratified the agreements. As a result, the court never came into existence. A number of ideas from the International Prize Court proposal can be seen in present-day international courts, such as its provision for judges ad hoc, later adopted in the Permanent Court of International Justice and the subsequent International Court of Justice. | 2001-11-12T08:56:58Z | 2023-08-20T23:17:08Z | [
"Template:Cite journal",
"Template:Short description",
"Template:Cite web",
"Template:Cite book"
]
| https://en.wikipedia.org/wiki/International_Prize_Court |
15,240 | Imam | Imam (/ɪˈmɑːm/; Arabic: إمام imām; plural: أئمة aʼimmah) is an Islamic leadership position. For Sunni Muslims, Imam is most commonly used as the title of a prayer leader of a mosque. In this context, imams may lead Islamic prayers, serve as community leaders, and provide religious guidance. Thus for Sunnis, anyone can study the basic Islamic sciences and become an Imam.
For most Shia Muslims, the Imams are absolute infallible leaders of the Islamic community after the Prophet. Shias consider the term to be only applicable to the members and descendants of the Ahl al-Bayt, the family of the Islamic prophet Muhammad. In Twelver Shīʿīsm there are 14 infallibles, 12 of which are Imams, the final being Imam Mahdi who will return at the end of times. The title was also used by the Zaidi Shia Imams of Yemen, who eventually founded the Mutawakkilite Kingdom of Yemen (1918–1970).
Sunni Islam does not have imams in the same sense as the Shi'a, an important distinction often overlooked by those outside of the Islamic religion. In everyday terms, an imam for Sunni Muslims is the one who leads Islamic formal (Fard) prayers, even in locations besides the mosque, whenever prayers are done in a group of two or more with one person leading (imam) and the others following by copying his ritual actions of worship. Friday sermon is most often given by an appointed imam. All mosques have an imam to lead the (congregational) prayers, even though it may sometimes just be a member from the gathered congregation rather than an officially appointed salaried person. The position of women as imams is controversial. The person that should be chosen, according to Hadith, is one who has most knowledge of the Quran and Sunnah (prophetic tradition) and is of good character.
Another well-known use of the term is as an honorary title for a recognized religious scholarly authority in Islam. It is especially used for a jurist (faqīh) and often for the founders of the four Sunni madhhabs or schools of jurisprudence (fiqh), as well as an authority on Quranic exegesis (tafsīr), such as Al-Tabari or Ibn Kathir.
It may also refer to the Muhaddithūn or scholars who created the analytical sciences related to Hadith and sometimes refer to the heads of Muhammad's family in their generational times due to their scholarly authority.
Imams are appointed by the state to work at mosques and they are required to be graduates of an İmam Hatip high school or have a university degree in theology. This is an official position regulated by the Presidency of Religious Affairs in Turkey and only males are appointed to this position, while female officials under the same state organisation work as preachers and Qur'an course tutors, religious services experts, etc. These officials are supposed to belong to the Hanafi school of the Sunni sect.
A central figure in an Islamic movement is also called an imam, like Imam Nawawi in Syria.
In the Shi'a context, an imam is not only presented as the man of God par excellence, but as participating fully in the names, attributes, and acts that theology usually reserves for God alone. Imams have a meaning more central to belief, referring to leaders of the community. Twelver and Ismaili Shi'a believe that these imams are chosen by God to be perfect examples for the faithful and to lead all humanity in all aspects of life. They also believe that all the imams chosen are free from committing any sin, impeccability which is called ismah. These leaders must be followed since they are appointed by God.
Here follows a list of the Twelvers Shia imams:
Fatimah, also Fatimah al-Zahraa, daughter of Muhammad (615–632), is also considered infallible but not an Imam. The Shi'a believe that the last Imam, the 12th Imam Mahdi, will one day emerge on the Day of Resurrection (Qiyamah).
At times, imams have held both secular and religious authority. This was the case in Oman among the Kharijite or Ibadi sects. At times, the imams were elected. At other times the position was inherited, as with the Yaruba dynasty from 1624 to 1742. See List of rulers of Oman, the Rustamid dynasty: 776–909, Nabhani dynasty: 1154–1624, the Yaruba dynasty: 1624–1742, the Al Said: 1744–present for further information. The Imamate of Futa Jallon (1727–1896) was a Fulani state in West Africa where secular power alternated between two lines of hereditary Imams, or almami. In the Zaidi Shiite sect, imams were secular as well as spiritual leaders who held power in Yemen for more than a thousand years. In 897, a Zaidi ruler, al-Hadi ila'l-Haqq Yahya, founded a line of such imams, a theocratic form of government which survived until the second half of the 20th century. (See details under Zaidiyyah, History of Yemen, Imams of Yemen.)
Ruhollah Khomeini is officially referred to as Imam in Iran. Several Iranian places and institutions are named "Imam Khomeini", including a city, an international airport, a hospital, and a university. | [
{
"paragraph_id": 0,
"text": "Imam (/ɪˈmɑːm/; Arabic: إمام imām; plural: أئمة aʼimmah) is an Islamic leadership position. For Sunni Muslims, Imam is most commonly used as the title of a prayer leader of a mosque. In this context, imams may lead Islamic prayers, serve as community leaders, and provide religious guidance. Thus for Sunnis, anyone can study the basic Islamic sciences and become an Imam.",
"title": ""
},
{
"paragraph_id": 1,
"text": "For most Shia Muslims, the Imams are absolute infallible leaders of the Islamic community after the Prophet. Shias consider the term to be only applicable to the members and descendants of the Ahl al-Bayt, the family of the Islamic prophet Muhammad. In Twelver Shīʿīsm there are 14 infallibles, 12 of which are Imams, the final being Imam Mahdi who will return at the end of times. The title was also used by the Zaidi Shia Imams of Yemen, who eventually founded the Mutawakkilite Kingdom of Yemen (1918–1970).",
"title": ""
},
{
"paragraph_id": 2,
"text": "Sunni Islam does not have imams in the same sense as the Shi'a, an important distinction often overlooked by those outside of the Islamic religion. In everyday terms, an imam for Sunni Muslims is the one who leads Islamic formal (Fard) prayers, even in locations besides the mosque, whenever prayers are done in a group of two or more with one person leading (imam) and the others following by copying his ritual actions of worship. Friday sermon is most often given by an appointed imam. All mosques have an imam to lead the (congregational) prayers, even though it may sometimes just be a member from the gathered congregation rather than an officially appointed salaried person. The position of women as imams is controversial. The person that should be chosen, according to Hadith, is one who has most knowledge of the Quran and Sunnah (prophetic tradition) and is of good character.",
"title": "Sunni imams"
},
{
"paragraph_id": 3,
"text": "Another well-known use of the term is as an honorary title for a recognized religious scholarly authority in Islam. It is especially used for a jurist (faqīh) and often for the founders of the four Sunni madhhabs or schools of jurisprudence (fiqh), as well as an authority on Quranic exegesis (tafsīr), such as Al-Tabari or Ibn Kathir.",
"title": "Sunni imams"
},
{
"paragraph_id": 4,
"text": "It may also refer to the Muhaddithūn or scholars who created the analytical sciences related to Hadith and sometimes refer to the heads of Muhammad's family in their generational times due to their scholarly authority.",
"title": "Sunni imams"
},
{
"paragraph_id": 5,
"text": "Imams are appointed by the state to work at mosques and they are required to be graduates of an İmam Hatip high school or have a university degree in theology. This is an official position regulated by the Presidency of Religious Affairs in Turkey and only males are appointed to this position, while female officials under the same state organisation work as preachers and Qur'an course tutors, religious services experts, etc. These officials are supposed to belong to the Hanafi school of the Sunni sect.",
"title": "Sunni imams"
},
{
"paragraph_id": 6,
"text": "A central figure in an Islamic movement is also called an imam, like Imam Nawawi in Syria.",
"title": "Sunni imams"
},
{
"paragraph_id": 7,
"text": "In the Shi'a context, an imam is not only presented as the man of God par excellence, but as participating fully in the names, attributes, and acts that theology usually reserves for God alone. Imams have a meaning more central to belief, referring to leaders of the community. Twelver and Ismaili Shi'a believe that these imams are chosen by God to be perfect examples for the faithful and to lead all humanity in all aspects of life. They also believe that all the imams chosen are free from committing any sin, impeccability which is called ismah. These leaders must be followed since they are appointed by God.",
"title": "Shia imams"
},
{
"paragraph_id": 8,
"text": "Here follows a list of the Twelvers Shia imams:",
"title": "Shia imams"
},
{
"paragraph_id": 9,
"text": "Fatimah, also Fatimah al-Zahraa, daughter of Muhammed (615–632), is also considered infallible but not an Imam. The Shi'a believe that the last Imam, the 12th Imam Mahdi will one day emerge on the Day of Resurrection (Qiyamah).",
"title": "Shia imams"
},
{
"paragraph_id": 10,
"text": "At times, imams have held both secular and religious authority. This was the case in Oman among the Kharijite or Ibadi sects. At times, the imams were elected. At other times the position was inherited, as with the Yaruba dynasty from 1624 and 1742. See List of rulers of Oman, the Rustamid dynasty: 776–909, Nabhani dynasty: 1154–1624, the Yaruba dynasty: 1624–1742, the Al Said: 1744–present for further information. The Imamate of Futa Jallon (1727–1896) was a Fulani state in West Africa where secular power alternated between two lines of hereditary Imams, or almami. In the Zaidi Shiite sect, imams were secular as well as spiritual leaders who held power in Yemen for more than a thousand years. In 897, a Zaidi ruler, al-Hadi ila'l-Haqq Yahya, founded a line of such imams, a theocratic form of government which survived until the second half of the 20th century. (See details under Zaidiyyah, History of Yemen, Imams of Yemen.)",
"title": "Imams as secular rulers"
},
{
"paragraph_id": 11,
"text": "Ruhollah Khomeini is officially referred to as Imam in Iran. Several Iranian places and institutions are named \"Imam Khomeini\", including a city, an international airport, a hospital, and a university.",
"title": "Imams as secular rulers"
}
]
| Imam is an Islamic leadership position. For Sunni Muslims, Imam is most commonly used as the title of a prayer leader of a mosque. In this context, imams may lead Islamic prayers, serve as community leaders, and provide religious guidance. Thus for Sunnis, anyone can study the basic Islamic sciences and become an Imam. For most Shia Muslims, the Imams are absolute infallible leaders of the Islamic community after the Prophet. Shias consider the term to be only applicable to the members and descendants of the Ahl al-Bayt, the family of the Islamic prophet Muhammad. In Twelver Shīʿīsm there are 14 infallibles, 12 of which are Imams, the final being Imam Mahdi who will return at the end of times. The title was also used by the Zaidi Shia Imams of Yemen, who eventually founded the Mutawakkilite Kingdom of Yemen (1918–1970). | 2001-11-12T21:34:49Z | 2023-12-15T22:19:36Z | [
"Template:Cite journal",
"Template:Wiktionary inline",
"Template:Portal bar",
"Template:Authority control",
"Template:Distinguish",
"Template:Main",
"Template:Efn",
"Template:Notelist",
"Template:Cite NIE",
"Template:IPAc-en",
"Template:Lang-ar",
"Template:Reflist",
"Template:Cite web",
"Template:Cite book",
"Template:Cite encyclopedia",
"Template:Usul al-fiqh",
"Template:Lang",
"Template:Infobox occupation",
"Template:Circa",
"Template:Commons category-inline",
"Template:Sufism terminology",
"Template:Short description",
"Template:Other uses",
"Template:Transliteration",
"Template:Harvnb"
]
| https://en.wikipedia.org/wiki/Imam |
15,242 | Instrument flight rules | In aviation, instrument flight rules (IFR) is one of two sets of regulations governing all aspects of civil aviation aircraft operations; the other is visual flight rules (VFR).
The U.S. Federal Aviation Administration's (FAA) Instrument Flying Handbook defines IFR as: "Rules and regulations established by the FAA to govern flight under conditions in which flight by outside visual reference is not safe. IFR flight depends upon flying by reference to instruments in the flight deck, and navigation is accomplished by reference to electronic signals." It is also a term used by pilots and controllers to indicate the type of flight plan an aircraft is flying, such as an IFR or VFR flight plan.
It is possible and fairly straightforward, in relatively clear weather conditions, to fly an aircraft solely by reference to outside visual cues, such as the horizon to maintain orientation, nearby buildings and terrain features for navigation, and other aircraft to maintain separation. This is known as operating the aircraft under visual flight rules (VFR), and is the most common mode of operation for small aircraft. However, it is safe to fly VFR only when these outside references can be clearly seen from a sufficient distance. When flying through or above clouds, or in fog, rain, dust or similar low-level weather conditions, these references can be obscured. Thus, cloud ceiling and flight visibility are the most important variables for safe operations during all phases of flight. The minimum weather conditions for ceiling and visibility for VFR flights are defined in FAR Part 91.155, and vary depending on the type of airspace in which the aircraft is operating, and on whether the flight is conducted during daytime or nighttime. However, typical daytime VFR minimums for most airspace are 3 statute miles of flight visibility and a distance from clouds of 500 feet below, 1,000 feet above, and 2,000 feet horizontally. Flight conditions reported as equal to or greater than these VFR minimums are referred to as visual meteorological conditions (VMC).
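Purely as an illustration of how the typical daytime figures quoted above fit together, the following sketch encodes them as a simple check; the function and parameter names are invented for this example, it considers clearance from a single cloud only, and it is not a substitute for the actual rules in FAR 91.155, which vary by airspace class, altitude, and time of day.

    def meets_typical_daytime_vfr_minimums(flight_visibility_sm,
                                           feet_below_cloud,
                                           feet_above_cloud,
                                           feet_horizontal_from_cloud):
        # Simplified reading of the typical daytime minimums described above:
        # at least 3 statute miles of flight visibility, and clearance from a
        # given cloud of 500 ft below it, 1,000 ft above it, or 2,000 ft laterally.
        clear_of_cloud = (feet_below_cloud >= 500
                          or feet_above_cloud >= 1000
                          or feet_horizontal_from_cloud >= 2000)
        return flight_visibility_sm >= 3.0 and clear_of_cloud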
Any aircraft operating under VFR must have the required equipment on board, as described in FAR Part 91.205 (which includes some instruments necessary for IFR flight). VFR pilots may use cockpit instruments as secondary aids to navigation and orientation, but are not required to; the view outside of the aircraft is the primary source for keeping the aircraft straight and level (orientation), flying to the intended destination (navigation), and avoiding obstacles and hazards (separation).
Visual flight rules are generally simpler than instrument flight rules, and require significantly less training and practice. VFR provides a great degree of freedom, allowing pilots to go where they want, when they want, and allows them a much wider latitude in determining how they get there.
When operation of an aircraft under VFR is not safe, because the visual cues outside the aircraft are obscured by weather, instrument flight rules must be used instead. IFR permits an aircraft to operate in instrument meteorological conditions (IMC), which is essentially any weather condition less than VMC but in which aircraft can still operate safely. Use of instrument flight rules is also required when flying in "Class A" airspace regardless of weather conditions. Class A airspace extends from 18,000 feet above mean sea level to flight level 600 (60,000 feet pressure altitude) above the contiguous 48 United States and overlying the waters within 12 miles thereof. Flight in Class A airspace requires pilots and aircraft to be instrument equipped and rated and to be operating under instrument flight rules (IFR). In many countries commercial airliners and their pilots must operate under IFR as the majority of flights enter Class A airspace. Procedures and training are significantly more complex compared to VFR instruction, as a pilot must demonstrate competency in conducting an entire cross-country flight solely by reference to instruments.
Instrument pilots must carefully evaluate weather, create a detailed flight plan based around specific instrument departure, en route, and arrival procedures, and dispatch the flight.
The distance by which an aircraft avoids obstacles or other aircraft is termed separation. The most important concept of IFR flying is that separation is maintained regardless of weather conditions. In controlled airspace, air traffic control (ATC) separates IFR aircraft from obstacles and other aircraft using a flight clearance based on route, time, distance, speed, and altitude. ATC monitors IFR flights on radar, or through aircraft position reports in areas where radar coverage is not available. Aircraft position reports are sent as voice radio transmissions. In the United States, a flight operating under IFR is required to provide position reports unless ATC advises a pilot that the plane is in radar contact. The pilot must resume position reports after ATC advises that radar contact has been lost, or that radar services are terminated.
IFR flights in controlled airspace require an ATC clearance for each part of the flight. A clearance always specifies a clearance limit, which is the farthest the aircraft can fly without a new clearance. In addition, a clearance typically provides a heading or route to follow, altitude, and communication parameters, such as frequencies and transponder codes.
In uncontrolled airspace, ATC clearances are unavailable. In some states a form of separation is provided to certain aircraft in uncontrolled airspace as far as is practical (often known under ICAO as an advisory service in class G airspace), but separation is neither mandated nor widely provided.
Despite the protection offered by flight in controlled airspace under IFR, the ultimate responsibility for the safety of the aircraft rests with the pilot in command, who can refuse clearances.
It is essential to differentiate between flight plan type (VFR or IFR) and weather conditions (VMC or IMC). While current and forecast weather may be a factor in deciding which type of flight plan to file, weather conditions themselves do not affect one's filed flight plan. For example, an IFR flight that encounters visual meteorological conditions (VMC) en route does not automatically change to a VFR flight, and the flight must still follow all IFR procedures regardless of weather conditions. In the US, weather conditions are forecast broadly as VFR, MVFR (marginal visual flight rules), IFR, or LIFR (low instrument flight rules).
The main purpose of IFR is the safe operation of aircraft in instrument meteorological conditions (IMC). The weather is considered to be MVFR or IMC when it does not meet the minimum requirements for visual meteorological conditions (VMC). To operate safely in IMC ("actual instrument conditions"), a pilot controls the aircraft relying on flight instruments and ATC provides separation.
It is important not to confuse IFR with IMC. A significant amount of IFR flying is conducted in visual meteorological conditions (VMC). Anytime a flight is operating in VMC and in a volume of airspace in which VFR traffic can operate, the crew is responsible for seeing and avoiding VFR traffic; however, because the flight is conducted under instrument flight rules, ATC still provides separation services from other IFR traffic, and can in many cases also advise the crew of the location of VFR traffic near the flight path.
Although dangerous and illegal, a certain amount of VFR flying is conducted in IMC. A scenario is a VFR pilot taking off in VMC conditions, but encountering deteriorating visibility while en route. Continued VFR flight into IMC can lead to spatial disorientation of the pilot which is the cause of a significant number of general aviation crashes. VFR flight into IMC is distinct from "VFR-on-top", an IFR procedure in which the aircraft operates in VMC using a hybrid of VFR and IFR rules, and "VFR over the top", a VFR procedure in which the aircraft takes off and lands in VMC but flies above an intervening area of IMC. Also possible in many countries is "Special VFR" flight, where an aircraft is explicitly granted permission to operate VFR within the controlled airspace of an airport in conditions technically less than VMC; the pilot asserts they have the necessary visibility to fly despite the weather, must stay in contact with ATC, and cannot leave controlled airspace while still below VMC minimums.
During flight under IFR, there are no visibility requirements, so flying through clouds (or other conditions where there is zero visibility outside the aircraft) is legal and safe. However, there are still minimum weather conditions that must be present in order for the aircraft to take off or to land; these vary according to the kind of operation, the type of navigation aids available, the location and height of terrain and obstructions in the vicinity of the airport, equipment on the aircraft, and the qualifications of the crew. For example, Reno-Tahoe International Airport (KRNO) in a mountainous region has significantly different instrument approaches for aircraft landing on the same runway surface, but from opposite directions. Aircraft approaching from the north must make visual contact with the airport at a higher altitude than when approaching from the south because of rapidly rising terrain south of the airport. This higher altitude allows a flight crew to clear the obstacle if a landing is aborted. In general, each specific instrument approach specifies the minimum weather conditions to permit landing.
Although large airliners, and increasingly, smaller aircraft, carry their own terrain awareness and warning system (TAWS), these are primarily backup systems providing a last layer of defense if a sequence of errors or omissions causes a dangerous situation.
Because IFR flights often take place without visual reference to the ground, a means of navigation other than looking outside the window is required. A number of navigational aids are available to pilots, including ground-based systems such as DME/VORs and NDBs as well as the satellite-based GPS/GNSS system. Air traffic control may assist in navigation by assigning pilots specific headings ("radar vectors"). The majority of IFR navigation is given by ground- and satellite-based systems, while radar vectors are usually reserved by ATC for sequencing aircraft for a busy approach or transitioning aircraft from takeoff to cruise, among other things.
Specific procedures allow IFR aircraft to transition safely through every stage of flight. These procedures specify how an IFR pilot should respond, even in the event of a complete radio failure, and loss of communications with ATC, including the expected aircraft course and altitude.
Departures are described in an IFR clearance issued by ATC prior to takeoff. The departure clearance may contain an assigned heading, one or more waypoints, and an initial altitude to fly. The clearance can also specify a departure procedure (DP) or standard instrument departure (SID) that should be followed unless "NO DP" is specified in the notes section of the filed flight plan.
En route flight is described by IFR charts showing navigation aids, fixes, and standard routes called airways. Aircraft with appropriate navigational equipment such as GPS, are also often cleared for a direct-to routing, where only the destination, or a few navigational waypoints are used to describe the route that the flight will follow. ATC will assign altitudes in its initial clearance or amendments thereto, and navigational charts indicate minimum safe altitudes for airways.
The approach portion of an IFR flight may begin with a standard terminal arrival route (STAR), describing common routes to fly to arrive at an initial approach fix (IAF) from which an instrument approach commences. An instrument approach terminates either by the pilot acquiring sufficient visual reference to proceed to the runway, or with a missed approach because the required visual reference is not seen in time.
To fly under IFR, a pilot must have an instrument rating and must be current (meet recency of experience requirements). In the United States, to file and fly under IFR, a pilot must be instrument-rated and, within the preceding six months, have flown six instrument approaches, as well as holding procedures and course interception and tracking with navaids. Flight under IFR beyond six months after meeting these requirements is not permitted; however, currency may be reestablished within the next six months by completing the requirements above. Beyond the twelfth month, examination ("instrument proficiency check") by an instructor is required.
Practicing instrument approaches can be done either in the instrument meteorological conditions or in visual meteorological conditions – in the latter case, a safety pilot is required so that the pilot practicing instrument approaches can wear a view-limiting device which restricts his field of view to the instrument panel. A safety pilot's primary duty is to observe and avoid other traffic.
In the UK, an IR (UK restricted) - formerly the "IMC rating" - permits flight under IFR in airspace classes B to G in instrument meteorological conditions; a non-instrument-rated pilot can also elect to fly under IFR in visual meteorological conditions outside controlled airspace. Compared to the rest of the world, the UK's flight crew licensing regime is somewhat unusual in its licensing for meteorological conditions and airspace, rather than flight rules.
The aircraft must be equipped and type-certified for instrument flight, and the related navigational equipment must have been inspected or tested within a specific period of time prior to the instrument flight.
In the United States, instruments required for IFR flight in addition to those that are required for VFR flight are: heading indicator, sensitive altimeter adjustable for barometric pressure, clock with a sweep-second pointer or digital equivalent, attitude indicator, radios and suitable avionics for the route to be flown, alternator or generator, gyroscopic rate-of-turn indicator that is either a turn coordinator or the turn and bank indicator. From 1999, single-engine helicopters could not be FAA-certified for IFR. Recently, however, Bell and Leonardo have obtained IFR certification for single-engine helicopters. | [
{
"paragraph_id": 0,
"text": "In aviation, instrument flight rules (IFR) is one of two sets of regulations governing all aspects of civil aviation aircraft operations; the other is visual flight rules (VFR).",
"title": ""
},
{
"paragraph_id": 1,
"text": "The U.S. Federal Aviation Administration's (FAA) Instrument Flying Handbook defines IFR as: \"Rules and regulations established by the FAA to govern flight under conditions in which flight by outside visual reference is not safe. IFR flight depends upon flying by reference to instruments in the flight deck, and navigation is accomplished by reference to electronic signals.\" It is also a term used by pilots and controllers to indicate the type of flight plan an aircraft is flying, such as an IFR or VFR flight plan.",
"title": ""
},
{
"paragraph_id": 2,
"text": "It is possible and fairly straightforward, in relatively clear weather conditions, to fly an aircraft solely by reference to outside visual cues, such as the horizon to maintain orientation, nearby buildings and terrain features for navigation, and other aircraft to maintain separation. This is known as operating the aircraft under visual flight rules (VFR), and is the most common mode of operation for small aircraft. However, it is safe to fly VFR only when these outside references can be clearly seen from a sufficient distance. When flying through or above clouds, or in fog, rain, dust or similar low-level weather conditions, these references can be obscured. Thus, cloud ceiling and flight visibility are the most important variables for safe operations during all phases of flight. The minimum weather conditions for ceiling and visibility for VFR flights are defined in FAR Part 91.155, and vary depending on the type of airspace in which the aircraft is operating, and on whether the flight is conducted during daytime or nighttime. However, typical daytime VFR minimums for most airspace is 3 statute miles of flight visibility and a distance from clouds of 500 feet below, 1,000 feet above, and 2,000 feet horizontally. Flight conditions reported as equal to or greater than these VFR minimums are referred to as visual meteorological conditions (VMC).",
"title": "Basic information"
},
{
"paragraph_id": 3,
"text": "Any aircraft operating under VFR must have the required equipment on board, as described in FAR Part 91.205 (which includes some instruments necessary for IFR flight). VFR pilots may use cockpit instruments as secondary aids to navigation and orientation, but are not required to; the view outside of the aircraft is the primary source for keeping the aircraft straight and level (orientation), flying to the intended destination (navigation), and avoiding obstacles and hazards (separation).",
"title": "Basic information"
},
{
"paragraph_id": 4,
"text": "Visual flight rules are generally simpler than instrument flight rules, and require significantly less training and practice. VFR provides a great degree of freedom, allowing pilots to go where they want, when they want, and allows them a much wider latitude in determining how they get there.",
"title": "Basic information"
},
{
"paragraph_id": 5,
"text": "When operation of an aircraft under VFR is not safe, because the visual cues outside the aircraft are obscured by weather, instrument flight rules must be used instead. IFR permits an aircraft to operate in instrument meteorological conditions (IMC), which is essentially any weather condition less than VMC but in which aircraft can still operate safely. Use of instrument flight rules is also required when flying in \"Class A\" airspace regardless of weather conditions. Class A airspace extends from 18,000 feet above mean sea level to flight level 600 (60,000 feet pressure altitude) above the contiguous 48 United States and overlying the waters within 12 miles thereof. Flight in Class A airspace requires pilots and aircraft to be instrument equipped and rated and to be operating under instrument flight rules (IFR). In many countries commercial airliners and their pilots must operate under IFR as the majority of flights enter Class A airspace. Procedures and training are significantly more complex compared to VFR instruction, as a pilot must demonstrate competency in conducting an entire cross-country flight solely by reference to instruments.",
"title": "Basic information"
},
{
"paragraph_id": 6,
"text": "Instrument pilots must carefully evaluate weather, create a detailed flight plan based around specific instrument departure, en route, and arrival procedures, and dispatch the flight.",
"title": "Basic information"
},
{
"paragraph_id": 7,
"text": "The distance by which an aircraft avoids obstacles or other aircraft is termed separation. The most important concept of IFR flying is that separation is maintained regardless of weather conditions. In controlled airspace, air traffic control (ATC) separates IFR aircraft from obstacles and other aircraft using a flight clearance based on route, time, distance, speed, and altitude. ATC monitors IFR flights on radar, or through aircraft position reports in areas where radar coverage is not available. Aircraft position reports are sent as voice radio transmissions. In the United States, a flight operating under IFR is required to provide position reports unless ATC advises a pilot that the plane is in radar contact. The pilot must resume position reports after ATC advises that radar contact has been lost, or that radar services are terminated.",
"title": "Separation and clearance"
},
{
"paragraph_id": 8,
"text": "IFR flights in controlled airspace require an ATC clearance for each part of the flight. A clearance always specifies a clearance limit, which is the farthest the aircraft can fly without a new clearance. In addition, a clearance typically provides a heading or route to follow, altitude, and communication parameters, such as frequencies and transponder codes.",
"title": "Separation and clearance"
},
{
"paragraph_id": 9,
"text": "In uncontrolled airspace, ATC clearances are unavailable. In some states a form of separation is provided to certain aircraft in uncontrolled airspace as far as is practical (often known under ICAO as an advisory service in class G airspace), but separation is not mandated nor widely provided.",
"title": "Separation and clearance"
},
{
"paragraph_id": 10,
"text": "Despite the protection offered by flight in controlled airspace under IFR, the ultimate responsibility for the safety of the aircraft rests with the pilot in command, who can refuse clearances.",
"title": "Separation and clearance"
},
{
"paragraph_id": 11,
"text": "It is essential to differentiate between flight plan type (VFR or IFR) and weather conditions (VMC or IMC). While current and forecast weather may be a factor in deciding which type of flight plan to file, weather conditions themselves do not affect one's filed flight plan. For example, an IFR flight that encounters visual meteorological conditions (VMC) en route does not automatically change to a VFR flight, and the flight must still follow all IFR procedures regardless of weather conditions. In the US, weather conditions are forecast broadly as VFR, MVFR (marginal visual flight rules), IFR, or LIFR (low instrument flight rules).",
"title": "Weather"
},
{
"paragraph_id": 12,
"text": "The main purpose of IFR is the safe operation of aircraft in instrument meteorological conditions (IMC). The weather is considered to be MVFR or IMC when it does not meet the minimum requirements for visual meteorological conditions (VMC). To operate safely in IMC (\"actual instrument conditions\"), a pilot controls the aircraft relying on flight instruments and ATC provides separation.",
"title": "Weather"
},
{
"paragraph_id": 13,
"text": "It is important not to confuse IFR with IMC. A significant amount of IFR flying is conducted in visual meteorological conditions (VMC). Anytime a flight is operating in VMC and in a volume of airspace in which VFR traffic can operate, the crew is responsible for seeing and avoiding VFR traffic; however, because the flight is conducted under instrument flight rules, ATC still provides separation services from other IFR traffic, and can in many cases also advise the crew of the location of VFR traffic near the flight path.",
"title": "Weather"
},
{
"paragraph_id": 14,
"text": "Although dangerous and illegal, a certain amount of VFR flying is conducted in IMC. A scenario is a VFR pilot taking off in VMC conditions, but encountering deteriorating visibility while en route. Continued VFR flight into IMC can lead to spatial disorientation of the pilot which is the cause of a significant number of general aviation crashes. VFR flight into IMC is distinct from \"VFR-on-top\", an IFR procedure in which the aircraft operates in VMC using a hybrid of VFR and IFR rules, and \"VFR over the top\", a VFR procedure in which the aircraft takes off and lands in VMC but flies above an intervening area of IMC. Also possible in many countries is \"Special VFR\" flight, where an aircraft is explicitly granted permission to operate VFR within the controlled airspace of an airport in conditions technically less than VMC; the pilot asserts they have the necessary visibility to fly despite the weather, must stay in contact with ATC, and cannot leave controlled airspace while still below VMC minimums.",
"title": "Weather"
},
{
"paragraph_id": 15,
"text": "During flight under IFR, there are no visibility requirements, so flying through clouds (or other conditions where there is zero visibility outside the aircraft) is legal and safe. However, there are still minimum weather conditions that must be present in order for the aircraft to take off or to land; these vary according to the kind of operation, the type of navigation aids available, the location and height of terrain and obstructions in the vicinity of the airport, equipment on the aircraft, and the qualifications of the crew. For example, Reno-Tahoe International Airport (KRNO) in a mountainous region has significantly different instrument approaches for aircraft landing on the same runway surface, but from opposite directions. Aircraft approaching from the north must make visual contact with the airport at a higher altitude than when approaching from the south because of rapidly rising terrain south of the airport. This higher altitude allows a flight crew to clear the obstacle if a landing is aborted. In general, each specific instrument approach specifies the minimum weather conditions to permit landing.",
"title": "Weather"
},
{
"paragraph_id": 16,
"text": "Although large airliners, and increasingly, smaller aircraft, carry their own terrain awareness and warning system (TAWS), these are primarily backup systems providing a last layer of defense if a sequence of errors or omissions causes a dangerous situation.",
"title": "Weather"
},
{
"paragraph_id": 17,
"text": "Because IFR flights often take place without visual reference to the ground, a means of navigation other than looking outside the window is required. A number of navigational aids are available to pilots, including ground-based systems such as DME/VORs and NDBs as well as the satellite-based GPS/GNSS system. Air traffic control may assist in navigation by assigning pilots specific headings (\"radar vectors\"). The majority of IFR navigation is given by ground- and satellite-based systems, while radar vectors are usually reserved by ATC for sequencing aircraft for a busy approach or transitioning aircraft from takeoff to cruise, among other things.",
"title": "Navigation"
},
{
"paragraph_id": 18,
"text": "Specific procedures allow IFR aircraft to transition safely through every stage of flight. These procedures specify how an IFR pilot should respond, even in the event of a complete radio failure, and loss of communications with ATC, including the expected aircraft course and altitude.",
"title": "Procedures"
},
{
"paragraph_id": 19,
"text": "Departures are described in an IFR clearance issued by ATC prior to takeoff. The departure clearance may contain an assigned heading, one or more waypoints, and an initial altitude to fly. The clearance can also specify a departure procedure (DP) or standard instrument departure (SID) that should be followed unless \"NO DP\" is specified in the notes section of the filed flight plan.",
"title": "Procedures"
},
{
"paragraph_id": 20,
"text": "En route flight is described by IFR charts showing navigation aids, fixes, and standard routes called airways. Aircraft with appropriate navigational equipment such as GPS, are also often cleared for a direct-to routing, where only the destination, or a few navigational waypoints are used to describe the route that the flight will follow. ATC will assign altitudes in its initial clearance or amendments thereto, and navigational charts indicate minimum safe altitudes for airways.",
"title": "Procedures"
},
{
"paragraph_id": 21,
"text": "The approach portion of an IFR flight may begin with a standard terminal arrival route (STAR), describing common routes to fly to arrive at an initial approach fix (IAF) from which an instrument approach commences. An instrument approach terminates either by the pilot acquiring sufficient visual reference to proceed to the runway, or with a missed approach because the required visual reference is not seen in time.",
"title": "Procedures"
},
{
"paragraph_id": 22,
"text": "To fly under IFR, a pilot must have an instrument rating and must be current (meet recency of experience requirements). In the United States, to file and fly under IFR, a pilot must be instrument-rated and, within the preceding six months, have flown six instrument approaches, as well as holding procedures and course interception and tracking with navaids. Flight under IFR beyond six months after meeting these requirements is not permitted; however, currency may be reestablished within the next six months by completing the requirements above. Beyond the twelfth month, examination (\"instrument proficiency check\") by an instructor is required.",
"title": "Qualifications"
},
{
"paragraph_id": 23,
"text": "Practicing instrument approaches can be done either in the instrument meteorological conditions or in visual meteorological conditions – in the latter case, a safety pilot is required so that the pilot practicing instrument approaches can wear a view-limiting device which restricts his field of view to the instrument panel. A safety pilot's primary duty is to observe and avoid other traffic.",
"title": "Qualifications"
},
{
"paragraph_id": 24,
"text": "In the UK, an IR (UK restricted) - formerly the \"IMC rating\" - which permits flight under IFR in airspace classes B to G in instrument meteorological conditions, a non-instrument-rated pilot can also elect to fly under IFR in visual meteorological conditions outside controlled airspace. Compared to the rest of the world, the UK's flight crew licensing regime is somewhat unusual in its licensing for meteorological conditions and airspace, rather than flight rules.",
"title": "Qualifications"
},
{
"paragraph_id": 25,
"text": "The aircraft must be equipped and type-certified for instrument flight, and the related navigational equipment must have been inspected or tested within a specific period of time prior to the instrument flight.",
"title": "Qualifications"
},
{
"paragraph_id": 26,
"text": "In the United States, instruments required for IFR flight in addition to those that are required for VFR flight are: heading indicator, sensitive altimeter adjustable for barometric pressure, clock with a sweep-second pointer or digital equivalent, attitude indicator, radios and suitable avionics for the route to be flown, alternator or generator, gyroscopic rate-of-turn indicator that is either a turn coordinator or the turn and bank indicator. From 1999 single-engine helicopters could not be FAA-certified for IFR. Recently, however, Bell and Leonardo have certified the single engine helicopters for instrument flight rules.",
"title": "Qualifications"
}
]
| In aviation, instrument flight rules (IFR) is one of two sets of regulations governing all aspects of civil aviation aircraft operations; the other is visual flight rules (VFR). The U.S. Federal Aviation Administration's (FAA) Instrument Flying Handbook defines IFR as: "Rules and regulations established by the FAA to govern flight under conditions in which flight by outside visual reference is not safe. IFR flight depends upon flying by reference to instruments in the flight deck, and navigation is accomplished by reference to electronic signals." It is also a term used by pilots and controllers to indicate the type of flight plan an aircraft is flying, such as an IFR or VFR flight plan. | 2001-11-18T20:31:05Z | 2023-08-25T13:33:38Z | [
"Template:Citation",
"Template:Cite web",
"Template:Cite news",
"Template:Spoken Wikipedia",
"Template:Short description",
"Template:Citation needed",
"Template:Cite book",
"Template:Cite journal",
"Template:Main",
"Template:Redirect",
"Template:Multiple issues",
"Template:Unreferenced section",
"Template:Use dmy dates",
"Template:Authority control",
"Template:Reflist"
]
| https://en.wikipedia.org/wiki/Instrument_flight_rules |
15,245 | Ismail Khan | Mohammad Ismail Khan (Dari/Pashto: محمد اسماعیل خان) (born 1946) is an Afghan former politician who served as Minister of Energy and Water from 2005 to 2013 and before that served as the governor of Herat Province. Originally a captain in the national army, he is widely known as a former warlord as he controlled a large mujahideen force, mainly his fellow Tajiks from western Afghanistan, during the Soviet–Afghan War.
His reputation gained him the nickname Lion of Herat. Ismail Khan was a key member of the now exiled political party Jamiat-e Islami and of the now defunct United National Front party. In 2021, Ismail Khan returned to arms to help defend Herat from the Taliban's offensive, which he and the Afghan Army lost. He was then captured by the Taliban forces and then reportedly fled to Iran on 16 August 2021.
Khan was born in or about 1946 in the Shindand District of Herat Province in Afghanistan. His family is from the Chahar-Mahal neighbourhood of Shindand.
In early 1979 Ismail Khan was a Captain in the Afghan National Army based in the western city of Herat. In early March of that year, there was a protest in front of the Communist governor's palace against the arrests and assassinations being carried out in the countryside by the Khalq government. The governor's troops opened fire on the demonstrators, who proceeded to storm the palace and hunt down Soviet advisers. The Herat garrison mutinied and joined the revolt in what is called the Herat uprising, with Ismail Khan and other officers distributing all available weapons to the insurgents. The government led by Nur Mohammed Taraki responded, pulverizing the city using Soviet supplied bombers and killing up to 24,000 citizens in less than a week. This event marked the opening salvo of the rebellion which led to the Soviet military intervention in Afghanistan in December 1979. Ismail Khan escaped to the countryside where he began to assemble a local rebel force.
During the ensuing war, he became the leader of the western command of Burhanuddin Rabbani's Jamiat-e-Islami political party. With Ahmad Shah Massoud, he was one of the most respected mujahideen leaders. In 1992, three years after the Soviet withdrawal from Afghanistan, the mujahideen captured Herat and Ismail Khan became governor.
In 1995, he successfully defended his province against the Taliban, in cooperation with defense minister Ahmad Shah Massoud. Khan even tried to attack the Taliban stronghold of Kandahar, but was repulsed. Later in September, an ally of the Jamiat, Uzbek General Abdul Rashid Dostum changed sides, and attacked Herat. Ismail Khan was forced to flee to neighboring Iran with 8,000 men and the Taliban took over Herat Province.
Two years later, while organizing opposition to the Taliban in Faryab area, he was betrayed and captured by Abdul Majid Rouzi who had defected to the Taliban along with Abdul Malik Pahlawan, then one of Dostum's deputies. Then in March 1999 he escaped from Kandahar prison. During the U.S. intervention in Afghanistan, he fought against the Taliban within the United Islamic Front for the Salvation of Afghanistan (Northern Alliance) and thus regained his position as Governor of Herat after they were victorious in December 2001.
After returning to Herat, Ismail Khan quickly consolidated his control over the region. He took over control of the city from the local ulema and quickly established control over the trade route between Herat and Iran, a large source of revenue. As Emir of Herat, Ismail Khan exercised great autonomy, providing social welfare for Heratis, expanding his power into neighbouring provinces, and maintaining direct international contacts. Although hated by the educated in Herat and often accused of human rights abuses, Ismail Khan's regime provided security, paid government employees, and made investments in public services. However, during his tenure as governor, Ismail Khan was accused of ruling his province like a private fiefdom, leading to increasing tensions with the Afghan Transitional Administration. In particular, he refused to pass on to the government the revenues gained from custom taxes on goods from Iran and Turkmenistan.
On 13 August 2003, President Karzai removed Governor Ismail Khan from his command of the 4th Corps. This was announced as part of a programme removing the ability of officials to hold both civilian and military posts.
Ismail Khan was ultimately removed from power in March 2004 due to pressure by neighbouring warlords and the central Afghan government. Various sources have presented different versions of the story, and the exact dynamics cannot be known with certainty. What is known is that Ismail Khan found himself at odds with a few regional commanders who, although theoretically his subordinates, attempted to remove him from power. Ismail Khan claims that these efforts began with a botched assassination attempt. Afterwards, these commanders moved their forces near Herat. Ismail Khan, unpopular with the Herati military class, was slow to mobilise his forces, perhaps waiting for the threat to Herat to become existential as a means to motivate his forces. However, the conflict was stopped with the intervention of International Security Assistance Force forces and soldiers of the Afghan National Army, freezing the conflict in its tracks. Ismail Khan's forces even fought skirmishes with the Afghan National Army, in which his son, Mirwais Sadiq was killed. Because Ismail Khan was contained by the Afghan National Army, the warlords who opposed him were quickly able to occupy strategic locations unopposed. Ismail Khan was forced to give up his governorship and to go to Kabul, where he served in Hamid Karzai's cabinet as the Minister of Energy.
In 2005 Ismail Khan became the Minister of Water and Energy.
In late 2012, the Government of Afghanistan accused Ismail Khan of illegally distributing weapons to his supporters. About 40 members of the country's Parliament asked Ismail Khan to answer their queries. The government believed that Khan was attempting to create some kind of disruption in the country.
On September 27, 2009, Ismail Khan survived a suicide blast that killed 4 of his bodyguards in Herat, in western Afghanistan. He was driving to Herat Airport when a powerful explosion occurred on the way there. Taliban spokesman, Zabiullah Mujahid, claimed responsibility and said the target was Khan.
Guantanamo captive Abdul Razzaq Hekmati requested Ismail Khan's testimony, when he was called before a Combatant Status Review Tribunal. Ismail Khan, like Afghan Minister of Defense Rahim Wardak, was one of the high-profile Afghans that those conducting the Tribunals ruled were "not reasonably available" to give a statement on a captive's behalf because they could not be located.
Hekmati had played a key role in helping Ismail Khan escape from the Taliban in 1999. Hekmati stood accused of helping Taliban leaders escape from the custody of Hamid Karzai's government.
Carlotta Gall and Andy Worthington interviewed Ismail Khan for a new article in The New York Times after Hekmati died of cancer in Guantanamo. According to The New York Times, Ismail Khan said he personally buttonholed the American ambassador to tell him that Hekmati was innocent and should be released. In contrast, Hekmati was told that the State Department had been unable to locate Khan.
In July 2021, Ismail Khan mobilized hundreds of his loyalists in Herat in support of the Afghan Armed Forces to defend the city from an offensive by the Taliban. Despite this, the city fell on 12 August 2021. After trying to escape by helicopter, Khan was captured by the Taliban. The Taliban interviewed him shortly afterwards and claimed that he and his forces had joined them. After negotiating with the Taliban, he was allowed to return to his residence.
As of August 2021, after leaving Taliban custody, Khan was living in Mashhad, Iran. He said that a conspiracy was responsible for Herat being captured by the Taliban.
Ismail Khan is a controversial figure. Reporters Without Borders has charged him with muzzling the press and ordering attacks on journalists. Also Human Rights Watch has accused him of human rights abuses.
Nevertheless, he remains a popular figure for some in Afghanistan. Unlike other mujahideen commanders, Khan has not been linked to large-scale massacres and atrocities such as those committed after the capture of Kabul in 1992. Following news of his dismissal, rioting broke out in the streets of Herat, and President Karzai had to ask him to make a personal appeal for calm. | [
{
"paragraph_id": 0,
"text": "Mohammad Ismail Khan (Dari/Pashto: محمد اسماعیل خان) (born 1946) is an Afghan former politician who served as Minister of Energy and Water from 2005 to 2013 and before that served as the governor of Herat Province. Originally a captain in the national army, he is widely known as a former warlord as he controlled a large mujahideen force, mainly his fellow Tajiks from western Afghanistan, during the Soviet–Afghan War.",
"title": ""
},
{
"paragraph_id": 1,
"text": "His reputation gained him the nickname Lion of Herat. Ismail Khan was a key member of the now exiled political party Jamiat-e Islami and of the now defunct United National Front party. In 2021, Ismail Khan returned to arms to help defend Herat from the Taliban's offensive, which he and the Afghan Army lost. He was then captured by the Taliban forces and then reportedly fled to Iran on 16 August 2021.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Khan was born in or about 1946 in the Shindand District of Herat Province in Afghanistan. His family is from the Chahar-Mahal neighbourhood of Shindand.",
"title": "Early years and rise to power"
},
{
"paragraph_id": 3,
"text": "In early 1979 Ismail Khan was a Captain in the Afghan National Army based in the western city of Herat. In early March of that year, there was a protest in front of the Communist governor's palace against the arrests and assassinations being carried out in the countryside by the Khalq government. The governor's troops opened fire on the demonstrators, who proceeded to storm the palace and hunt down Soviet advisers. The Herat garrison mutinied and joined the revolt in what is called the Herat uprising, with Ismail Khan and other officers distributing all available weapons to the insurgents. The government led by Nur Mohammed Taraki responded, pulverizing the city using Soviet supplied bombers and killing up to 24,000 citizens in less than a week. This event marked the opening salvo of the rebellion which led to the Soviet military intervention in Afghanistan in December 1979. Ismail Khan escaped to the countryside where he began to assemble a local rebel force.",
"title": "Early years and rise to power"
},
{
"paragraph_id": 4,
"text": "During the ensuing war, he became the leader of the western command of Burhanuddin Rabbani's Jamiat-e-Islami, political party. With Ahmad Shah Massoud, he was one of the most respected mujahideen leaders. In 1992, three years after the Soviet withdrawal from Afghanistan, the mujahideen captured Herat and Ismail Khan became governor.",
"title": "Early years and rise to power"
},
{
"paragraph_id": 5,
"text": "In 1995, he successfully defended his province against the Taliban, in cooperation with defense minister Ahmad Shah Massoud. Khan even tried to attack the Taliban stronghold of Kandahar, but was repulsed. Later in September, an ally of the Jamiat, Uzbek General Abdul Rashid Dostum changed sides, and attacked Herat. Ismail Khan was forced to flee to neighboring Iran with 8,000 men and the Taliban took over Herat Province.",
"title": "Early years and rise to power"
},
{
"paragraph_id": 6,
"text": "Two years later, while organizing opposition to the Taliban in Faryab area, he was betrayed and captured by Abdul Majid Rouzi who had defected to the Taliban along with Abdul Malik Pahlawan, then one of Dostum's deputies. Then in March 1999 he escaped from Kandahar prison. During the U.S. intervention in Afghanistan, he fought against the Taliban within the United Islamic Front for the Salvation of Afghanistan (Northern Alliance) and thus regained his position as Governor of Herat after they were victorious in December 2001.",
"title": "Early years and rise to power"
},
{
"paragraph_id": 7,
"text": "After returning to Herat, Ismail Khan quickly consolidated his control over the region. He took over control of the city from the local ulema and quickly established control over the trade route between Herat and Iran, a large source of revenue. As Emir of Herat, Ismail Khan exercised great autonomy, providing social welfare for Heratis, expanding his power into neighbouring provinces, and maintaining direct international contacts. Although hated by the educated in Herat and often accused of human rights abuses, Ismail Khan's regime provided security, paid government employees, and made investments in public services. However, during his tenure as governor, Ismail Khan was accused of ruling his province like a private fiefdom, leading to increasing tensions with the Afghan Transitional Administration. In particular, he refused to pass on to the government the revenues gained from custom taxes on goods from Iran and Turkmenistan.",
"title": "Karzai administration and return to Afghanistan"
},
{
"paragraph_id": 8,
"text": "On 13 August 2003, President Karzai removed Governor Ismail Khan from his command of the 4th Corps. This was announced as part of a programme removing the ability of officials to hold both civilian and military posts.",
"title": "Karzai administration and return to Afghanistan"
},
{
"paragraph_id": 9,
"text": "Ismail Khan was ultimately removed from power in March 2004 due to pressure by neighbouring warlords and the central Afghan government. Various sources have presented different versions of the story, and the exact dynamics cannot be known with certainty. What is known is that Ismail Khan found himself at odds with a few regional commanders who, although theoretically his subordinates, attempted to remove him from power. Ismail Khan claims that these efforts began with a botched assassination attempt. Afterwards, these commanders moved their forces near Herat. Ismail Khan, unpopular with the Herati military class, was slow to mobilise his forces, perhaps waiting for the threat to Herat to become existential as a means to motivate his forces. However, the conflict was stopped with the intervention of International Security Assistance Force forces and soldiers of the Afghan National Army, freezing the conflict in its tracks. Ismail Khan's forces even fought skirmishes with the Afghan National Army, in which his son, Mirwais Sadiq was killed. Because Ismail Khan was contained by the Afghan National Army, the warlords who opposed him were quickly able to occupy strategic locations unopposed. Ismail Khan was forced to give up his governorship and to go to Kabul, where he served in Hamid Karzai's cabinet as the Minister of Energy.",
"title": "Karzai administration and return to Afghanistan"
},
{
"paragraph_id": 10,
"text": "In 2005 Ismail Khan became the Minister of Water and Energy.",
"title": "Karzai administration and return to Afghanistan"
},
{
"paragraph_id": 11,
"text": "In late 2012, the Government of Afghanistan accused Ismail Khan of illegally distributing weapons to his supporters. About 40 members of the country's Parliament requested Ismail Khan to answer their queries. The government believes that Khan is attempting to create some kind of disruption in the country.",
"title": "Karzai administration and return to Afghanistan"
},
{
"paragraph_id": 12,
"text": "On September 27, 2009, Ismail Khan survived a suicide blast that killed 4 of his bodyguards in Herat, in western Afghanistan. He was driving to Herat Airport when a powerful explosion occurred on the way there. Taliban spokesman, Zabiullah Mujahid, claimed responsibility and said the target was Khan.",
"title": "Assassination attempt"
},
{
"paragraph_id": 13,
"text": "Guantanamo captive Abdul Razzaq Hekmati requested Ismail Khan's testimony, when he was called before a Combatant Status Review Tribunal. Ismail Khan, like Afghan Minister of Defense Rahim Wardak, was one of the high-profile Afghans that those conducting the Tribunals ruled were \"not reasonably available\" to give a statement on a captive's behalf because they could not be located.",
"title": "Assassination attempt"
},
{
"paragraph_id": 14,
"text": "Hekmati had played a key role in helping Ismail Khan escape from the Taliban in 1999. Hekmati stood accused of helping Taliban leaders escape from the custody of Hamid Karzai's government.",
"title": "Assassination attempt"
},
{
"paragraph_id": 15,
"text": "Carlotta Gall and Andy Worthington interviewed Ismail Khan for a new The New York Times article after Hekmati died of cancer in Guantanamo. According to the New York Times Ismail Khan said he personally buttonholed the American ambassador to tell him that Hekmati was innocent, and should be released. In contrast, Hekmati was told that the State Department had been unable to locate Khan.",
"title": "Assassination attempt"
},
{
"paragraph_id": 16,
"text": "In July 2021, Ismail Khan mobilized hundreds of his loyalists in Herat in support of the Afghan Armed Forces to defend the city from an offensive by the Taliban. Despite this, the city fell on 12 August 2021. After trying to escape by helicopter, Khan was captured by the Taliban. The Taliban interviewed him shortly after and claimed that he and his forces have joined them. After negotiating with the Taliban, he was allowed to return to his residence.",
"title": "2021 Taliban offensive and capture"
},
{
"paragraph_id": 17,
"text": "After leaving Taliban custody, as of August 2021 Khan is living in Mashhad, Iran. He said that a conspiracy was responsible for Herat being captured by the Taliban.",
"title": "2021 Taliban offensive and capture"
},
{
"paragraph_id": 18,
"text": "Ismail Khan is a controversial figure. Reporters Without Borders has charged him with muzzling the press and ordering attacks on journalists. Also Human Rights Watch has accused him of human rights abuses.",
"title": "Controversy"
},
{
"paragraph_id": 19,
"text": "Nevertheless, he remains a popular figure for some in Afghanistan. Unlike other mujahideen commanders, Khan has not been linked to large-scale massacres and atrocities such as those committed after the capture of Kabul in 1992. Following news of his dismissal, rioting broke out in the streets of Herat, and President Karzai had to ask him to make a personal appeal for calm.",
"title": "Controversy"
}
]
| Mohammad Ismail Khan is an Afghan former politician who served as Minister of Energy and Water from 2005 to 2013 and before that served as the governor of Herat Province. Originally a captain in the national army, he is widely known as a former warlord as he controlled a large mujahideen force, mainly his fellow Tajiks from western Afghanistan, during the Soviet–Afghan War. His reputation gained him the nickname Lion of Herat. Ismail Khan was a key member of the now exiled political party Jamiat-e Islami and of the now defunct United National Front party. In 2021, Ismail Khan returned to arms to help defend Herat from the Taliban's offensive, which he and the Afghan Army lost. He was then captured by the Taliban forces and then reportedly fled to Iran on 16 August 2021. | 2001-11-13T21:58:32Z | 2023-11-24T07:14:28Z | [
"Template:S-end",
"Template:For",
"Template:Reflist",
"Template:Cite news",
"Template:Webarchive",
"Template:Cite press release",
"Template:S-bef",
"Template:Short description",
"Template:Refend",
"Template:S-start",
"Template:S-ttl",
"Template:S-aft",
"Template:Use dmy dates",
"Template:Infobox officeholder",
"Template:Refbegin",
"Template:Cite web",
"Template:Commons category",
"Template:Cite book",
"Template:Cite tweet",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Ismail_Khan |
15,250 | Indigo | Indigo is a deep color close to the color wheel blue (a primary color in the RGB color space), as well as to some variants of ultramarine, based on the ancient dye of the same name. The word "indigo" comes from the Latin word indicum, meaning "Indian", as the plant-based dye was originally exported to Europe from India.
It is traditionally regarded as a color in the visible spectrum, as well as one of the seven colors of the rainbow: the color between blue and violet; however, sources differ as to its actual position in the electromagnetic spectrum.
The first known recorded use of indigo as a color name in English was in 1289.
Indigofera tinctoria and related species were cultivated in East Asia, Egypt, India, Bangladesh and Peru in antiquity. The early evidence for the use of indigo dates to around 4000 BC and comes from Huaca Prieta, in contemporary Peru. Pliny the Elder mentions India as the source of the dye after which it was named. It was imported from there in small quantities via the Silk Road.
Indigo Dye
Indigo dye is a blue color obtained from many different types of plants. The indigo plant (Indigofera tinctoria), often called "true indigo", probably produces the best results, although several others are close: Japanese indigo (Polygonum tinctorium), Natal indigo (Indigofera arrecta), Guatemalan indigo (Indigofera suffruticosa), and Chinese indigo (Persicaria tinctoria).
In early Europe the main source was the woad plant Isatis tinctoria, also known as pastel. For a long time, woad was the main source of blue dye in Europe; it was replaced by "true indigo" as trade routes opened up. Plant sources have now been largely replaced by synthetic dyes, except in artisanal work such as shibori, an ancient method of tying and stitching to block the dye, which has had quite a revival in recent years.
The Early Modern English word indigo referred to the dye, not to the color (hue) itself, and indigo is not traditionally part of the basic color-naming system.
Isaac Newton introduced indigo as one of the seven base colors of his work. In the mid-1660s, when Newton bought a pair of prisms at a fair near Cambridge, the East India Company had begun importing indigo dye into England, supplanting the homegrown woad as source of blue dye. In a pivotal experiment in the history of optics, the young Newton shone a narrow beam of sunlight through a prism to produce a rainbow-like band of colors on the wall. In describing this optical spectrum, Newton acknowledged that the spectrum had a continuum of colors, but named seven: "The originall or primary colours are Red, yellow, Green, Blew, & a violet purple; together with Orang, Indico, & an indefinite varietie of intermediate gradations." He linked the seven prismatic colors to the seven notes of a western major scale, as shown in his color wheel, with orange and indigo as the semitones. Having decided upon seven colors, he asked a friend to repeatedly divide up the spectrum that was projected from the prism onto the wall:
I desired a friend to draw with a pencil lines cross the image, or pillar of colours, where every one of the seven aforenamed colours was most full and brisk, and also where he judged the truest confines of them to be, whilst I held the paper so, that the said image might fall within a certain compass marked on it. And this I did, partly because my own eyes are not very critical in distinguishing colours, partly because another, to whom I had not communicated my thoughts about this matter, could have nothing but his eyes to determine his fancy in making those marks.
Indigo is therefore counted as one of the traditional colors of the rainbow, the order of which is given by the mnemonics "Richard of York gave battle in vain" and Roy G. Biv. James Clerk Maxwell and Hermann von Helmholtz accepted indigo as an appropriate name for the color flanking violet in the spectrum.
Later scientists concluded that Newton named the colors differently from current usage. According to Gary Waldman, "A careful reading of Newton's work indicates that the color he called indigo, we would normally call blue; his blue is then what we would name blue-green or cyan." If this is true, Newton's seven spectral colors would have been:
The human eye does not readily differentiate hues in the wavelengths between what are now called blue and violet. If this is where Newton meant indigo to lie, most individuals would have difficulty distinguishing indigo from its neighbors. According to Isaac Asimov, "It is customary to list indigo as a color lying between blue and violet, but it has never seemed to me that indigo is worth the dignity of being considered a separate color. To my eyes, it seems merely deep blue."
In 1821, Abraham Werner published Werner's Nomenclature of Colours, where indigo, called indigo blue, is classified as a blue hue and not listed among the violet hues. He writes that the color is composed of "Berlin blue, a little black, and a small portion of apple green," indicating it is the color of blue copper ore, with Berlin blue being described as the color of a blue jay's wing, a hepatica flower, or a blue sapphire.
According to an article, Definition of the Color Indigo published in Nature magazine in the late 1800s, Newton's use of the term "indigo" referred to a spectral color between blue and violet. However, the article states that Wilhelm von Bezold, in his treatise on color, disagreed with Newton's use of the term, on the basis that the pigment indigo was a darker hue than the spectral color; and furthermore, Professor Ogden Rood points out that indigo pigment corresponds to the cyan-blue region of the spectrum, lying between blue and green, although darker in hue. Rood considers that artificial ultramarine pigment is closer to the point of the spectrum described as "indigo", and proposed renaming that spectral point as "ultramarine". The article goes on to state that comparison of the pigments, both dry and wet, with Maxwell's discs and with the spectrum, that indigo is almost identical to Prussian blue, stating that it "certainly does not lie on the violet side of 'blue.'" When scraped, a lump of indigo pigment appears more violet, and if powdered or dissolved, becomes greenish.
Several modern sources place indigo in the electromagnetic spectrum between 420 and 450 nanometers, which lies on the short-wave side of color wheel (RGB) blue, towards (spectral) violet.
The correspondence of this definition with colors of actual indigo dyes, though, is disputed. Optical scientists Hardy and Perrin list indigo as between 445 and 464 nm wavelength, which occupies a spectrum segment from roughly the color wheel (RGB) blue extending to the long-wave side, towards azure.
Other modern color scientists, such as Bohren and Clothiaux (2006), and J.W.G. Hunt (1980), divide the spectrum between violet and blue at about 450 nm, with no hue specifically named indigo.
Towards the end of the 20th century, purple colors also became referred to as "indigo". In the 1980s, computer programmers Jim Gettys, Paul Ravelling, John C. Thomas and Jim Fulton produced a list of colors for the X Window System. The color identified as "indigo" is actually a dark purple hue; the programmers assigned it the hex code #4B0082, which was not related to the color indigo as generally understood at the time. The list which came with version X11 of the system became the basis of the CSS and HTML color rendition used in websites and web design. This collection of color names is somewhat arbitrary: Thomas used a box of 72 Crayola crayons as a standard, whereas Ravelling used color swatches from the now-defunct Sinclair Paints company, resulting in the X11 color list containing fanciful color names such as "papaya whip", "blanched almond" and "peach puff". The database was also criticised for its many inconsistencies, such as "dark gray" being lighter than "gray", and for the color distribution being uneven, tending towards reds and greens at the expense of blues. Physics author John Spacey writes on the website Simplicable that the X11 programmers did not have any background in color theory, and that as these names are used by web designers and graphic designers, the name indigo has since that time been strongly associated with purple or violet. Spacey writes, "As such, a few programmers accidentally repurposed a color name that was known to civilisations for thousands of years."
The Crayola company released an indigo crayon in 1999, with the Crayola website using the hex code #4F49C6 to approximate the crayon color. The 2001 iron indigo crayon is portrayed using hex code #184FA1, the 2004 indigo crayon color uses #5D76CB, and the 2019 iridescent indigo uses #3C32CD.
Like many other colors (orange, rose, and violet are the best-known), indigo gets its name from an object in the natural world—the plant named indigo once used for dyeing cloth (see also Indigo dye).
The color pigment indigo is equivalent to the web color indigo and approximates the color indigo that is usually reproduced in pigments and colored pencils.
The color of indigo dye is a different color from either spectrum indigo or pigment indigo. This is the actual color of the dye. A vat full of this dye is a darker color, approximating the web color midnight blue.
The color "electric indigo" is a bright and saturated color between the traditional indigo and violet. This is the brightest color indigo that can be approximated on a computer screen; it is a color located between the (primary) blue and the color violet of the RGB color wheel.
The web color blue violet or deep indigo is a tone of indigo brighter than pigment indigo, but not as bright as electric indigo.
Below are displayed several colors now referred to as indigo, some of which have been named as indigo since the adoption of html color names in the world wide web era.
"Electric indigo" is brighter than the pigment indigo reproduced above. When plotted on the CIE chromaticity diagram, this color is at 435 nanometers, in the middle of the portion of the spectrum traditionally considered indigo, i.e., between 450 and 420 nanometers. This color is only an approximation of spectral indigo, since actual spectral colors are outside the gamut of the sRGB color system.
At right is displayed the web color "blue-violet", a color intermediate in brightness between electric indigo and pigment indigo. It is also known as "deep indigo".
The color box on the right displays the web color indigo, the color indigo as it would be reproduced by artists' paints as opposed to the brighter indigo above (electric indigo) that is possible to reproduce on a computer screen. Its hue is closer to violet than to indigo dye for which the color is named. Pigment indigo can be obtained by mixing 55% pigment cyan with about 45% pigment magenta.
Compare the subtractive colors to the additive colors in the two primary color charts in the article on primary colors to see the distinction between electric colors as reproducible from light on a computer screen (additive colors) and the pigment colors reproducible with pigments (subtractive colors); the additive colors are significantly brighter because they are produced from light instead of pigment.
Web color indigo represents the way the color indigo was always reproduced in pigments, paints, or colored pencils in the 1950s. By the 1970s, because of the advent of psychedelic art, artists became accustomed to brighter pigments. Pigments called "bright indigo" or "bright blue-violet" (the pigment equivalent of the electric indigo reproduced in the section above) became available in artists' pigments and colored pencils.
'Tropical Indigo' is the color that is called añil in the Guía de coloraciones (Guide to colorations) by Rosa Gallego and Juan Carlos Sanz, a color dictionary published in 2005 that is widely popular in the Hispanophone realm.
Marina Warner's novel Indigo (1992) is a retelling of Shakespeare's The Tempest and features the production of indigo dye by Sycorax.
The French Army adopted dark blue indigo at the time of the French Revolution, as a replacement for the white uniforms previously worn by the Royal infantry regiments. In 1806, Napoleon decided to restore the white coats because of shortages of indigo dye imposed by the British continental blockade. However, the greater practicability of the blue color led to its retention, and indigo remained the dominant color of French military coats until 1914.
In the Better Call Saul episode "Hero", Howard Hamlin mentions that his law firm Hamlin Hamlin & McGill trademarked a colour called "Hamlindigo" whilst confronting Jimmy McGill over trademark infringement in a billboard advertisement he produced for his own legal services.
The spiritualist applications use electric indigo, because the color is positioned between blue and violet on the spectrum. | [
{
"paragraph_id": 0,
"text": "Indigo is a deep color close to the color wheel blue (a primary color in the RGB color space), as well as to some variants of ultramarine, based on the ancient dye of the same name. The word \"indigo\" comes from the Latin word indicum, meaning \"Indian\", as the plant-based dye was originally exported to Europe from India.",
"title": ""
},
{
"paragraph_id": 1,
"text": "It is traditionally regarded as a color in the visible spectrum, as well as one of the seven colors of the rainbow: the color between blue and violet; however, sources differ as to its actual position in the electromagnetic spectrum.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The first known recorded use of indigo as a color name in English was in 1289.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Indigofera tinctoria and related species were cultivated in East Asia, Egypt, India, Bangladesh and Peru in antiquity. The early evidence for the use of indigo dates to around 4000 BC and comes from Huaca Prieta, in contemporary Peru. Pliny the Elder mentions India as the source of the dye after which it was named. It was imported from there in small quantities via the Silk Road.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Indigo Dye",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Idigo dye is a blue color, obtained from many different types of plants: the indigo plant or (Indigofera Tinctoria) often called \"True Indigo\" probably produces the best results. Although several are close; Japanese indigo, (Polygonum Tinctoria), Natal indigo (Indigofera arrecta), and Guatemalan indigo (Indigofera suffruticosa), the Chinese indigo (Persicaria tinctoria).",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In early Europe the main source was from the woad plant Isatis tinctoria, also known as pastel. For a long time, woad was the main source of blue dye in Europe. Woad was replaced by \"true indigo\", as trade routes opened up. Plant sources have now been largely replaced by synthetic dyes. Except in artisanal works such as Shibori an ancient method of tying and stitching to block the dye which has had quite a revival in recent years.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "e Early Modern English word indigo referred to the dye, not to the color (hue) itself, and indigo is not traditionally part of the basic color-naming system.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Isaac Newton introduced indigo as one of the seven base colors of his work. In the mid-1660s, when Newton bought a pair of prisms at a fair near Cambridge, the East India Company had begun importing indigo dye into England, supplanting the homegrown woad as source of blue dye. In a pivotal experiment in the history of optics, the young Newton shone a narrow beam of sunlight through a prism to produce a rainbow-like band of colors on the wall. In describing this optical spectrum, Newton acknowledged that the spectrum had a continuum of colors, but named seven: \"The originall or primary colours are Red, yellow, Green, Blew, & a violet purple; together with Orang, Indico, & an indefinite varietie of intermediate gradations.\" He linked the seven prismatic colors to the seven notes of a western major scale, as shown in his color wheel, with orange and indigo as the semitones. Having decided upon seven colors, he asked a friend to repeatedly divide up the spectrum that was projected from the prism onto the wall:",
"title": "History"
},
{
"paragraph_id": 9,
"text": "I desired a friend to draw with a pencil lines cross the image, or pillar of colours, where every one of the seven aforenamed colours was most full and brisk, and also where he judged the truest confines of them to be, whilst I held the paper so, that the said image might fall within a certain compass marked on it. And this I did, partly because my own eyes are not very critical in distinguishing colours, partly because another, to whom I had not communicated my thoughts about this matter, could have nothing but his eyes to determine his fancy in making those marks.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Indigo is therefore counted as one of the traditional colors of the rainbow, the order of which is given by the mnemonics \"Richard of York gave battle in vain\" and Roy G. Biv. James Clerk Maxwell and Hermann von Helmholtz accepted indigo as an appropriate name for the color flanking violet in the spectrum.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Later scientists concluded that Newton named the colors differently from current usage. According to Gary Waldman, \"A careful reading of Newton's work indicates that the color he called indigo, we would normally call blue; his blue is then what we would name blue-green or cyan.\" If this is true, Newton's seven spectral colors would have been:",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The human eye does not readily differentiate hues in the wavelengths between what are now called blue and violet. If this is where Newton meant indigo to lie, most individuals would have difficulty distinguishing indigo from its neighbors. According to Isaac Asimov, \"It is customary to list indigo as a color lying between blue and violet, but it has never seemed to me that indigo is worth the dignity of being considered a separate color. To my eyes, it seems merely deep blue.\"",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In 1821, Abraham Werner published Werner's Nomenclature of Colours, where indigo, called indigo blue, is classified as a blue hue, and not listed among the violet hues. He writes that the color is composed of \"Berlin blue, a little black, and a small portion of apple green,\" and indicating it is the color of blue copper ore, with Berlin blue being described as the color of a blue jay's wing, a hepatica flower, or a blue sapphire.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "According to an article, Definition of the Color Indigo published in Nature magazine in the late 1800s, Newton's use of the term \"indigo\" referred to a spectral color between blue and violet. However, the article states that Wilhelm von Bezold, in his treatise on color, disagreed with Newton's use of the term, on the basis that the pigment indigo was a darker hue than the spectral color; and furthermore, Professor Ogden Rood points out that indigo pigment corresponds to the cyan-blue region of the spectrum, lying between blue and green, although darker in hue. Rood considers that artificial ultramarine pigment is closer to the point of the spectrum described as \"indigo\", and proposed renaming that spectral point as \"ultramarine\". The article goes on to state that comparison of the pigments, both dry and wet, with Maxwell's discs and with the spectrum, that indigo is almost identical to Prussian blue, stating that it \"certainly does not lie on the violet side of 'blue.'\" When scraped, a lump of indigo pigment appears more violet, and if powdered or dissolved, becomes greenish.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Several modern sources place indigo in the electromagnetic spectrum between 420 and 450 nanometers, which lies on the short-wave side of color wheel (RGB) blue, towards (spectral) violet.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "The correspondence of this definition with colors of actual indigo dyes, though, is disputed. Optical scientists Hardy and Perrin list indigo as between 445 and 464 nm wavelength, which occupies a spectrum segment from roughly the color wheel (RGB) blue extending to the long-wave side, towards azure.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Other modern color scientists, such as Bohren and Clothiaux (2006), and J.W.G. Hunt (1980), divide the spectrum between violet and blue at about 450 nm, with no hue specifically named indigo.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Towards the end of the 20th century, purple colors also became referred to as \"indigo\". In the 1980s, computer programmers Jim Gettys, Paul Ravelling, John C. Thomas and Jim Fulton produced a list of colors for the X Window Operating System. The color identified as \"indigo\" is actually a dark purple hue; the programmers assigned it the hex code #4B0082 , which was not related to the color indigo as generally understood at the time. The list which came with version X11 of the operating system became the basis of the CSS and html color rendition used in websites and web design. This collection of color names is somewhat arbitrary: Thomas used a box of 72 Crayola crayons as a standard, whereas Ravelling used color swabs from the now-defunct Sinclair Paints company, resulting in the X11 color list containing fanciful color names such as \"papaya whip\", \"blanched almond\" and \"peach puff\". The database was also criticised for its many inconsistencies, such as \"dark gray\" being lighter than \"gray\", and for the color distribution being uneven, tending towards reds and greens at the expense of blues. Physics author John Spacey writes on the website Simplicable that the X11 programmers did not have any background in color theory, and that as these names are used by web designers and graphic designers, the name indigo has since that time been strongly associated with purple or violet. Spacey writes, \"As such, a few programmers accidentally repurposed a color name that was known to civilisations for thousands of years.\"",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The Crayola company released an indigo crayon in 1999, with the Crayola website using the hex code #4F49C6 to approximate the crayon color. The 2001 iron indigo crayon is portrayed using hex code #184FA1 , the 2004 indigo crayon color uses #5D76CB , the 2019 iridescent indigo uses #3C32CD .",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Like many other colors (orange, rose, and violet are the best-known), indigo gets its name from an object in the natural world—the plant named indigo once used for dyeing cloth (see also Indigo dye).",
"title": "Distinction among tones of indigo"
},
{
"paragraph_id": 21,
"text": "The color pigment indigo is equivalent to the web color indigo and approximates the color indigo that is usually reproduced in pigments and colored pencils.",
"title": "Distinction among tones of indigo"
},
{
"paragraph_id": 22,
"text": "The color of indigo dye is a different color from either spectrum indigo or pigment indigo. This is the actual color of the dye. A vat full of this dye is a darker color, approximating the web color midnight blue.",
"title": "Distinction among tones of indigo"
},
{
"paragraph_id": 23,
"text": "The color \"electric indigo\" is a bright and saturated color between the traditional indigo and violet. This is the brightest color indigo that can be approximated on a computer screen; it is a color located between the (primary) blue and the color violet of the RGB color wheel.",
"title": "Distinction among tones of indigo"
},
{
"paragraph_id": 24,
"text": "The web color blue violet or deep indigo is a tone of indigo brighter than pigment indigo, but not as bright as electric indigo.",
"title": "Distinction among tones of indigo"
},
{
"paragraph_id": 25,
"text": "Below are displayed several colors now referred to as indigo, some of which have been named as indigo since the adoption of html color names in the world wide web era.",
"title": "Distinction among tones of indigo"
},
{
"paragraph_id": 26,
"text": "\"Electric indigo\" is brighter than the pigment indigo reproduced above. When plotted on the CIE chromaticity diagram, this color is at 435 nanometers, in the middle of the portion of the spectrum traditionally considered indigo, i.e., between 450 and 420 nanometers. This color is only an approximation of spectral indigo, since actual spectral colors are outside the gamut of the sRGB color system.",
"title": "Distinction among tones of indigo"
},
{
"paragraph_id": 27,
"text": "At right is displayed the web color \"blue-violet\", a color intermediate in brightness between electric indigo and pigment indigo. It is also known as \"deep indigo\".",
"title": "Distinction among tones of indigo"
},
{
"paragraph_id": 28,
"text": "The color box on the right displays the web color indigo, the color indigo as it would be reproduced by artists' paints as opposed to the brighter indigo above (electric indigo) that is possible to reproduce on a computer screen. Its hue is closer to violet than to indigo dye for which the color is named. Pigment indigo can be obtained by mixing 55% pigment cyan with about 45% pigment magenta.",
"title": "Distinction among tones of indigo"
},
{
"paragraph_id": 29,
"text": "Compare the subtractive colors to the additive colors in the two primary color charts in the article on primary colors to see the distinction between electric colors as reproducible from light on a computer screen (additive colors) and the pigment colors reproducible with pigments (subtractive colors); the additive colors are significantly brighter because they are produced from light instead of pigment.",
"title": "Distinction among tones of indigo"
},
{
"paragraph_id": 30,
"text": "Web color indigo represents the way the color indigo was always reproduced in pigments, paints, or colored pencils in the 1950s. By the 1970s, because of the advent of psychedelic art, artists became accustomed to brighter pigments. Pigments called \"bright indigo\" or \"bright blue-violet\" (the pigment equivalent of the electric indigo reproduced in the section above) became available in artists' pigments and colored pencils.",
"title": "Distinction among tones of indigo"
},
{
"paragraph_id": 31,
"text": "'Tropical Indigo' is the color that is called añil in the Guía de coloraciones (Guide to colorations) by Rosa Gallego and Juan Carlos Sanz, a color dictionary published in 2005 that is widely popular in the Hispanophone realm.",
"title": "Distinction among tones of indigo"
},
{
"paragraph_id": 32,
"text": "Marina Warner's novel Indigo (1992) is a retelling of Shakespeare's The Tempest and features the production of indigo dye by Sycorax.",
"title": "In culture"
},
{
"paragraph_id": 33,
"text": "The French Army adopted dark blue indigo at the time of the French Revolution, as a replacement for the white uniforms previously worn by the Royal infantry regiments. In 1806, Napoleon decided to restore the white coats because of shortages of indigo dye imposed by the British continental blockade. However, the greater practicability of the blue color led to its retention, and indigo remained the dominant color of French military coats until 1914.",
"title": "In culture"
},
{
"paragraph_id": 34,
"text": "In the Better Call Saul episode \"Hero\", Howard Hamlin mentions that his law firm Hamlin Hamlin & McGill trademarked a colour called \"Hamlindigo\" whilst confronting Jimmy McGill over trademark infringement in a billboard advertisement he produced for his own legal services.",
"title": "In culture"
},
{
"paragraph_id": 35,
"text": "The spiritualist applications use electric indigo, because the color is positioned between blue and violet on the spectrum.",
"title": "In culture"
}
]
| Indigo is a deep color close to the color wheel blue, as well as to some variants of ultramarine, based on the ancient dye of the same name. The word "indigo" comes from the Latin word indicum, meaning "Indian", as the plant-based dye was originally exported to Europe from India. It is traditionally regarded as a color in the visible spectrum, as well as one of the seven colors of the rainbow: the color between blue and violet; however, sources differ as to its actual position in the electromagnetic spectrum. The first known recorded use of indigo as a color name in English was in 1289. | 2001-11-16T20:12:43Z | 2023-12-23T01:12:17Z | [
"Template:Cite magazine",
"Template:Wikisource inline",
"Template:Short description",
"Template:About",
"Template:Center",
"Template:Cite news",
"Template:Color topics",
"Template:Further",
"Template:Cn",
"Template:Cite web",
"Template:Infobox color",
"Template:Better source",
"Template:Shades of violet",
"Template:Webarchive",
"Template:Shades of blue",
"Template:Main",
"Template:-",
"Template:Cite journal",
"Template:Electromagnetic spectrum",
"Template:Pp-pc1",
"Template:Reflist",
"Template:Dead link",
"Template:Colorsample",
"Template:Cite book",
"Template:ISBN",
"Template:Distinguish",
"Template:Use dmy dates",
"Template:Lang",
"Template:Redirect",
"Template:Clear"
]
| https://en.wikipedia.org/wiki/Indigo |
15,251 | International Monetary Fund | The International Monetary Fund (IMF) is a major financial agency of the United Nations, and an international financial institution funded by 190 member countries, with headquarters in Washington, D.C. It is regarded as the global lender of last resort to national governments, and a leading supporter of exchange-rate stability. Its stated mission is "working to foster global monetary cooperation, secure financial stability, facilitate international trade, promote high employment and sustainable economic growth, and reduce poverty around the world." Established on December 27, 1945 at the Bretton Woods Conference, primarily according to the ideas of Harry Dexter White and John Maynard Keynes, it started with 29 member countries and the goal of reconstructing the international monetary system after World War II. It now plays a central role in the management of balance of payments difficulties and international financial crises. Through a quota system, countries contribute funds to a pool from which countries can borrow if they experience balance of payments problems. As of 2016, the fund had SDR 477 billion (about US$667 billion).
The IMF works to stabilize and foster the economies of its member countries by its use of the fund, as well as other activities such as gathering and analyzing economic statistics and surveillance of its members' economies. IMF funds come from two major sources: quotas and loans. Quotas, which are pooled funds from member nations, generate most IMF funds. The size of members' quotas increase according to their economic and financial importance in the world. The quotas are increased periodically as a means of boosting the IMF's resources in the form of special drawing rights.
The current managing director (MD) and chairwoman of the IMF is Bulgarian economist Kristalina Georgieva, who has held the post since October 1, 2019. Indian-American economist Gita Gopinath, previously the chief economist, was appointed as first deputy managing director, effective January 21, 2022. Pierre-Olivier Gourinchas was appointed chief economist on January 24, 2022.
According to the IMF itself, it works to foster global growth and economic stability by providing policy advice and financing the members by working with developing countries to help them achieve macroeconomic stability and reduce poverty. The rationale for this is that private international capital markets function imperfectly and many countries have limited access to financial markets. Such market imperfections, together with balance-of-payments financing, provide the justification for official financing, without which many countries could only correct large external payment imbalances through measures with adverse economic consequences. The IMF provides alternate sources of financing such as the Poverty Reduction and Growth Facility.
Upon the founding of the IMF, its three primary functions were:
The IMF's role was fundamentally altered by the floating exchange rates after 1971. It shifted to examining the economic policies of countries with IMF loan agreements to determine whether a shortage of capital was due to economic fluctuations or economic policy. The IMF also researched what types of government policy would ensure economic recovery. A particular concern of the IMF was to prevent financial crises, such as those in Mexico in 1982, Brazil in 1987, East Asia in 1997–98, and Russia in 1998, from spreading and threatening the entire global financial and currency system. The challenge was to promote and implement a policy that reduced the frequency of crises among emerging market countries, especially the middle-income countries which are vulnerable to massive capital outflows. Rather than maintaining a position of oversight of only exchange rates, its function became one of surveillance of the overall macroeconomic performance of member countries. Its role became considerably more active because the IMF now manages economic policy rather than just exchange rates.
In addition, the IMF negotiates conditions on lending and loans under their policy of conditionality, which was established in the 1950s. Low-income countries can borrow on concessional terms, which means that no interest is charged for a period of time, through the Extended Credit Facility (ECF), the Standby Credit Facility (SCF) and the Rapid Credit Facility (RCF). Non-concessional loans, which carry interest, are provided mainly through the Stand-By Arrangements (SBA), the Flexible Credit Line (FCL), the Precautionary and Liquidity Line (PLL), and the Extended Fund Facility. The IMF provides emergency assistance via the Rapid Financing Instrument (RFI) to members facing urgent balance-of-payments needs.
The IMF is mandated to oversee the international monetary and financial system and monitor the economic and financial policies of its member countries. This activity is known as surveillance and facilitates international co-operation. Since the demise of the Bretton Woods system of fixed exchange rates in the early 1970s, surveillance has evolved largely by way of changes in procedures rather than through the adoption of new obligations. The responsibilities changed from those of guardians to those of overseers of members' policies.
The Fund typically analyses the appropriateness of each member country's economic and financial policies for achieving orderly economic growth, and assesses the consequences of these policies for other countries and for the global economy. For instance, the IMF played a significant role in individual countries, such as Armenia and Belarus, in providing stabilization financing from 2009 to 2019. The maximum sustainable debt level of a polity, which is watched closely by the IMF, was defined in 2011 by IMF economists to be 120% of GDP. Indeed, Greece's debt was at about this level when its economy melted down in 2010.
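To make that threshold concrete, the following minimal Python sketch shows how a debt-to-GDP ratio is computed and compared against the 120%-of-GDP level cited above; the debt and GDP figures used are hypothetical placeholders, not actual IMF or Greek statistics.

```python
# Minimal sketch: general government gross debt as a share of nominal GDP,
# checked against the 120%-of-GDP sustainability figure cited above.
# All figures are hypothetical placeholders, not official statistics.

def debt_to_gdp_ratio(gross_debt: float, nominal_gdp: float) -> float:
    """Return gross government debt as a percentage of nominal GDP."""
    return 100.0 * gross_debt / nominal_gdp

SUSTAINABILITY_THRESHOLD = 120.0  # percent of GDP, per the figure cited above

ratio = debt_to_gdp_ratio(gross_debt=330.0, nominal_gdp=250.0)  # e.g. in billions of euros
print(f"Debt ratio: {ratio:.1f}% of GDP")  # Debt ratio: 132.0% of GDP
print("Exceeds threshold" if ratio > SUSTAINABILITY_THRESHOLD else "Within threshold")
```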
In 1995, the International Monetary Fund began to work on data dissemination standards with a view to guiding IMF member countries in disseminating their economic and financial data to the public. The International Monetary and Financial Committee (IMFC) endorsed the guidelines for the dissemination standards, which were split into two tiers: the General Data Dissemination System (GDDS) and the Special Data Dissemination Standard (SDDS).
The executive board approved the SDDS and GDDS in 1996 and 1997, respectively, and subsequent amendments were published in a revised Guide to the General Data Dissemination System. The system is directed primarily at statisticians and aims to improve many aspects of statistical systems in a country. It is also part of the World Bank Millennium Development Goals (MDG) and Poverty Reduction Strategy Papers (PRSPs).
The primary objective of the GDDS is to encourage member countries to build a framework for improving data quality and statistical capacity, in order to evaluate statistical needs and set priorities for improving the timeliness, transparency, reliability, and accessibility of financial and economic data. Some countries initially used the GDDS, but later upgraded to the SDDS.
Some entities that are not IMF members also contribute statistical data to the systems:
A 2021 study found that the IMF's surveillance activities have "a substantial impact on sovereign debt with much greater impacts in emerging than high-income economies".
World Economic Outlook is a survey, published twice a year, by International Monetary Fund staff, which analyzes the global economy in the near and medium term.
IMF conditionality is a set of policies or conditions that the IMF requires in exchange for financial resources. The IMF does not require collateral from countries for loans; instead, it requires the government seeking assistance to correct its macroeconomic imbalances in the form of policy reform. If the conditions are not met, the funds are withheld. The concept of conditionality was introduced in a 1952 executive board decision and later incorporated into the Articles of Agreement.
Conditionality is associated with economic theory as well as an enforcement mechanism for repayment. Stemming primarily from the work of Jacques Polak, the theoretical underpinning of conditionality was the "monetary approach to the balance of payments".
Some of the conditions for structural adjustment can include:
These conditions are known as the Washington Consensus.
These loan conditions ensure that the borrowing country will be able to repay the IMF and that the country will not attempt to solve its balance-of-payments problems in a way that would negatively impact the international economy. The incentive problem of moral hazard—when economic agents maximise their own utility to the detriment of others because they do not bear the full consequences of their actions—is mitigated through conditions rather than through collateral; countries in need of IMF loans do not generally possess internationally valuable collateral anyway.
Conditionality also reassures the IMF that the funds it lends will be used for the purposes defined by the Articles of Agreement and provides safeguards that the country will be able to rectify its macroeconomic and structural imbalances. In the judgment of the IMF, the adoption by the member of certain corrective measures or policies will allow it to repay the IMF, thereby ensuring that the resources will be available to support other members.
As of 2004, borrowing countries have had a good track record for repaying credit extended under the IMF's regular lending facilities with full interest over the duration of the loan. This indicates that IMF lending does not impose a burden on creditor countries, as lending countries receive market-rate interest on most of their quota subscription, plus any of their own-currency subscriptions that are loaned out by the IMF, plus all of the reserve assets that they provide the IMF.
The IMF was originally laid out as a part of the Bretton Woods system exchange agreement in 1944. During the Great Depression, countries sharply raised barriers to trade in an attempt to improve their failing economies. This led to the devaluation of national currencies and a decline in world trade.
This breakdown in international monetary cooperation created a need for oversight. The representatives of 45 governments met at the Bretton Woods Conference in the Mount Washington Hotel in Bretton Woods, New Hampshire, in the United States, to discuss a framework for postwar international economic cooperation and how to rebuild Europe.
There were two views on the role the IMF should assume as a global economic institution. American delegate Harry Dexter White foresaw an IMF that functioned more like a bank, making sure that borrowing states could repay their debts on time. Most of White's plan was incorporated into the final acts adopted at Bretton Woods. British economist John Maynard Keynes, on the other hand, imagined that the IMF would be a cooperative fund upon which member states could draw to maintain economic activity and employment through periodic crises. This view suggested an IMF that helped governments act as the United States government had during the New Deal in response to the Great Depression of the 1930s.
The IMF formally came into existence on 27 December 1945, when the first 29 countries ratified its Articles of Agreement. By the end of 1946 the IMF had grown to 39 members. On 1 March 1947, the IMF began its financial operations, and on 8 May France became the first country to borrow from it.
The IMF was one of the key organizations of the international economic system; its design allowed the system to balance the rebuilding of international capitalism with the maximization of national economic sovereignty and human welfare, also known as embedded liberalism. The IMF's influence in the global economy steadily increased as it accumulated more members. Its membership began to expand in the late 1950s and during the 1960s as many African countries became independent and applied for membership. But the Cold War limited the Fund's membership, with most countries in the Soviet sphere of influence not joining until the 1970s and 1980s.
The Bretton Woods exchange rate system prevailed until 1971 when the United States government suspended the convertibility of the US$ (and dollar reserves held by other governments) into gold. This is known as the Nixon Shock. The changes to the IMF articles of agreement reflecting these changes were ratified in 1976 by the Jamaica Accords. Later in the 1970s, large commercial banks began lending to states because they were awash in cash deposited by oil exporters. The lending of the so-called money center banks led to the IMF changing its role in the 1980s after a world recession provoked a crisis that brought the IMF back into global financial governance.
In the mid-1980s, the IMF shifted its narrow focus from currency stabilization to a broader focus of promoting market-liberalizing reforms through structural adjustment programs. This shift occurred without a formal renegotiation of the organization's charter or operational guidelines. The Ronald Reagan administration, in particular Treasury Secretary James Baker, his assistant secretary David Mulford and deputy assistant secretary Charles Dallara, pressured the IMF to attach market-liberal reforms to the organization's conditional loans.
During the 20th century, the IMF shifted its position on capital controls. Whereas the IMF permitted capital controls at its founding and throughout the 1970s, IMF staff increasingly favored free capital movement from the 1980s onwards. This shift happened in the aftermath of an emerging consensus in economics on the desirability of free capital movement, the retirement of IMF staff hired in the 1940s and 1950s, and the recruitment of staff exposed to new thinking in economics.
The IMF provided two major lending packages in the early 2000s to Argentina (during the 1998–2002 Argentine great depression) and Uruguay (after the 2002 Uruguay banking crisis). However, by the mid-2000s, IMF lending was at its lowest share of world GDP since the 1970s.
In May 2010, the IMF participated, in a 3:11 proportion (about €30 billion of the €110 billion total), in the first Greek bailout, to address the great accumulation of public debt caused by continuing large public sector deficits. As part of the bailout, the Greek government agreed to adopt austerity measures that would reduce the deficit from 11% of GDP in 2009 to "well below 3%" in 2014. The bailout did not include debt restructuring measures such as a haircut, to the chagrin of the Swiss, Brazilian, Indian, Russian, and Argentinian Directors of the IMF, with the Greek authorities themselves (at the time, PM George Papandreou and Finance Minister Giorgos Papakonstantinou) ruling out a haircut.
A second bailout package of more than €100 billion was agreed upon over the course of a few months from October 2011, during which time Papandreou was forced from office. The so-called Troika, of which the IMF is part, jointly managed the programme, which was approved by the executive directors of the IMF on 15 March 2012 for XDR 23.8 billion and saw private bondholders take a haircut of upwards of 50%. In the interval between May 2010 and February 2012, the private banks of the Netherlands, France, and Germany reduced their exposure to Greek debt from €122 billion to €66 billion.
As of January 2012, the largest borrowers from the IMF in order were Greece, Portugal, Ireland, Romania, and Ukraine.
On 25 March 2013, a €10 billion international bailout of Cyprus was agreed by the Troika, in exchange for Cyprus agreeing to close the country's second-largest bank and to impose a one-time levy on uninsured deposits at the Bank of Cyprus. No insured deposits of €100,000 or less were to be affected under the terms of a novel bail-in scheme.
The topic of sovereign debt restructuring was taken up by the IMF in April 2013, for the first time since 2005, in a report entitled "Sovereign Debt Restructuring: Recent Developments and Implications for the Fund's Legal and Policy Framework". The paper, which was discussed by the board on 20 May, summarised the recent experiences in Greece, St Kitts and Nevis, Belize, and Jamaica. An explanatory interview with deputy director Hugh Bredenkamp was published a few days later, as was a deconstruction by Matina Stevis of The Wall Street Journal.
In its October 2013 Fiscal Monitor publication, the IMF suggested that a capital levy capable of reducing Euro-area government debt ratios to "end-2007 levels" would require a very high tax rate of about 10%.
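As a rough illustration of how such a levy rate can be derived, the minimal sketch below divides a targeted debt reduction by a taxable base of private net wealth; the figures are entirely hypothetical and are not the IMF's own estimates.

```python
# Rough, hypothetical illustration of how a one-off capital levy rate can be
# derived: rate = (targeted debt reduction) / (taxable private net wealth).
# The figures below are illustrative placeholders, not the IMF's estimates.

def capital_levy_rate(debt_reduction_pct_gdp: float, net_wealth_pct_gdp: float) -> float:
    """Return the one-off levy rate (in %) on net wealth needed to cut the
    debt ratio by the targeted number of percentage points of GDP."""
    return 100.0 * debt_reduction_pct_gdp / net_wealth_pct_gdp

# E.g. cutting the debt ratio by 30 percentage points of GDP against a
# taxable base of private net wealth worth 300% of GDP implies a ~10% levy.
print(f"{capital_levy_rate(30.0, 300.0):.1f}%")  # 10.0%
```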
The Fiscal Affairs department of the IMF, headed at the time by Acting Director Sanjeev Gupta, produced a January 2014 report entitled "Fiscal Policy and Income Inequality" that stated that "Some taxes levied on wealth, especially on immovable property, are also an option for economies seeking more progressive taxation ... Property taxes are equitable and efficient, but underutilized in many economies ... There is considerable scope to exploit this tax more fully, both as a revenue source and as a redistributive instrument."
At the end of March 2014, the IMF secured an $18 billion bailout fund for the provisional government of Ukraine in the aftermath of the Revolution of Dignity.
In late 2019, the IMF estimated that global growth would reach 3.4% in 2020; by November 2020, however, it expected the global economy to shrink by 4.4% because of the coronavirus pandemic.
In March 2020, Kristalina Georgieva announced that the IMF stood ready to mobilize $1 trillion as its response to the COVID-19 pandemic. This was in addition to the $50 billion fund it had announced two weeks earlier, of which $5 billion had already been requested by Iran. One day earlier, on 11 March, the UK pledged £150 million to the IMF's catastrophe relief fund. It came to light on 27 March that "more than 80 poor and middle-income countries" had sought a bailout due to the coronavirus.
On 13 April 2020, the IMF said that it "would provide immediate debt relief to 25 member countries under its Catastrophe Containment and Relief Trust (CCRT)" programme.
Not all member countries of the IMF are sovereign states, and therefore not all "member countries" of the IMF are members of the United Nations. Among "member countries" of the IMF that are not member states of the UN are non-sovereign areas with special jurisdictions that are officially under the sovereignty of full UN member states, such as Aruba, Curaçao, Hong Kong, and Macao, as well as Kosovo. The corporate members appoint ex-officio voting members, who are listed below. All members of the IMF are also International Bank for Reconstruction and Development (IBRD) members and vice versa.
Former members are Cuba (which left in 1964), and Taiwan, which was ejected from the IMF in 1980 after losing the support of the then United States President Jimmy Carter and was replaced by the People's Republic of China. However, "Taiwan Province of China" is still listed in the official IMF indices.
Apart from Cuba, the other UN states that do not belong to the IMF are Liechtenstein, Monaco and North Korea. However, Andorra became the 190th member on 16 October 2020.
Poland withdrew in 1950—allegedly pressured by the Soviet Union—but returned in 1986. The former Czechoslovakia was expelled in 1954 for "failing to provide required data" and was readmitted in 1990, after the Velvet Revolution.
Any country may apply to be a part of the IMF. After the IMF's formation, in the early postwar period, rules for IMF membership were left relatively loose. Members needed to make periodic membership payments towards their quota, to refrain from currency restrictions unless granted IMF permission, to abide by the Code of Conduct in the IMF Articles of Agreement, and to provide national economic information. However, stricter rules were imposed on governments that applied to the IMF for funding.
The countries that joined the IMF between 1945 and 1971 agreed to keep their exchange rates secured at rates that could be adjusted only to correct a "fundamental disequilibrium" in the balance of payments, and only with the IMF's agreement.
Member countries of the IMF have access to information on the economic policies of all member countries, the opportunity to influence other members' economic policies, technical assistance in banking, fiscal affairs, and exchange matters, financial support in times of payment difficulties, and increased opportunities for trade and investment.
The board of governors consists of one governor and one alternate governor for each member country. Each member country appoints its two governors. The Board normally meets once a year and is responsible for electing or appointing an executive director to the executive board. While the board of governors is officially responsible for approving quota increases, special drawing right allocations, the admittance of new members, compulsory withdrawal of members, and amendments to the Articles of Agreement and By-Laws, in practice it has delegated most of its powers to the IMF's executive board.
The board of governors is advised by the International Monetary and Financial Committee and the Development Committee. The International Monetary and Financial Committee has 24 members and monitors developments in global liquidity and the transfer of resources to developing countries. The Development Committee has 25 members and advises on critical development issues and on financial resources required to promote economic development in developing countries.
The board of governors reports directly to the managing director of the IMF, Kristalina Georgieva.
24 Executive Directors make up the executive board. The executive directors represent all 190 member countries in a geographically based roster. Countries with large economies have their own executive director, but most countries are grouped in constituencies representing four or more countries.
Following the 2008 Amendment on Voice and Participation, which came into effect in March 2011, seven countries each appoint an executive director: the United States, Japan, China, Germany, France, the United Kingdom, and Saudi Arabia. The remaining 17 Directors represent constituencies consisting of 2 to 23 countries. The Board usually meets several times each week. The board membership and constituencies are scheduled for periodic review every eight years.
The IMF is led by a managing director, who is head of the staff and serves as chairman of the executive board. The managing director is the most powerful position at the IMF. Historically, the IMF's managing director has been a European citizen and the president of the World Bank has been an American citizen. However, this standard is increasingly being questioned and competition for these two posts may soon open up to include other qualified candidates from any part of the world. In August 2019, the International Monetary Fund removed the age limit of 65 for its managing director position.
In 2011, the world's largest developing countries, the BRIC states, issued a statement declaring that the tradition of appointing a European as managing director undermined the legitimacy of the IMF and called for the appointment to be merit-based.
Managing director Dominique Strauss-Kahn was arrested in May 2011 in connection with charges of sexually assaulting a New York hotel room attendant, and he resigned on 18 May. The charges were later dropped. On 28 June 2011 Christine Lagarde was confirmed as managing director of the IMF for a five-year term starting on 5 July 2011. She was re-elected by consensus for a second five-year term, starting 5 July 2016, being the only candidate nominated for the post of managing director.
The managing director is assisted by a First Deputy managing director (FDMD) who, by convention, has always been a citizen of the United States. Together, the managing director and their First Deputy lead the senior management of the IMF. Like the managing director, the First Deputy traditionally serves a five-year term.
The chief economist leads the research division of the IMF and is a "senior official" of the IMF.
IMF staff have considerable autonomy and are known to shape IMF policy. According to Jeffrey Chwieroth, "It is the staff members who conduct the bulk of the IMF's tasks; they formulate policy proposals for consideration by member states, exercise surveillance, carry out loan negotiations and design the programs, and collect and systematize detailed information." Most IMF staff are economists. According to a 1968 study, nearly 60% of staff were from English-speaking developed countries. By 2004, between 40 and 50% of staff were from English-speaking developed countries.
A 1996 study found that 90% of new staff with a PhD obtained them from universities in the United States or Canada. A 1999 study found that none of the new staff with a PhD obtained their PhD in the Global South.
Voting power in the IMF is based on a quota system. Each member has a number of basic votes (the basic votes of all members together equal 5.502% of the total votes, distributed equally among them), plus one additional vote for each special drawing right (SDR) 100,000 of the member's quota. The SDR is the unit of account of the IMF and represents a potential claim to currency. It is based on a basket of key international currencies. The basic votes generate a slight bias in favour of small countries, but the additional quota-based votes outweigh this bias. Changes in the voting shares require approval by a super-majority of 85% of voting power.
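As an illustration of the arithmetic described above, here is a minimal Python sketch of how a member's votes could be tallied under the quota system; the quota and basic-vote figures are hypothetical placeholders, not official IMF data.

```python
# Minimal sketch of the IMF voting arithmetic described above:
# one vote per SDR 100,000 of quota, plus an equal allotment of basic votes
# (basic votes for all members together equal 5.502% of total votes).
# The figures below are hypothetical placeholders, not official IMF data.

def member_votes(quota_sdr: float, basic_votes: int) -> int:
    """Return a member's total votes: basic votes plus one vote per SDR 100,000 of quota."""
    quota_votes = int(quota_sdr // 100_000)
    return basic_votes + quota_votes

# Hypothetical member with a quota of SDR 3 billion and 1,450 basic votes.
total = member_votes(quota_sdr=3_000_000_000, basic_votes=1_450)
print(total)  # 31450 under these assumed figures
```

A member's voting share is then its total votes divided by the sum of all members' totals, which is why the quota-based component dominates for large economies.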
In December 2015, the United States Congress adopted legislation authorising the 2010 Quota and Governance Reforms. As a result,
The IMF's quota system was created to raise funds for loans. Each IMF member country is assigned a quota, or contribution, that reflects the country's relative size in the global economy. Each member's quota also determines its relative voting power. Thus, financial contributions from member governments are linked to voting power in the organization.
This system follows the logic of a shareholder-controlled organization: wealthy countries have more say in the making and revision of rules. Since decision making at the IMF reflects each member's relative economic position in the world, wealthier countries that provide more money to the IMF have more influence than poorer members that contribute less; nonetheless, the IMF focuses on redistribution.
Quotas are normally reviewed every five years and can be increased when deemed necessary by the board of governors. IMF voting shares are relatively inflexible: countries that grow economically have tended to become under-represented as their voting power lags behind. Reforming the representation of developing countries within the IMF has been suggested, because these countries' economies represent a large portion of the global economic system but this is not reflected in the IMF's decision-making process through the nature of the quota system. Joseph Stiglitz argues, "There is a need to provide more effective voice and representation for developing countries, which now represent a much larger portion of world economic activity since 1944, when the IMF was created." In 2008, a number of quota reforms were passed, including shifting 6% of quota shares to dynamic emerging markets and developing countries.
The IMF's membership is divided along income lines: certain countries provide financial resources while others use these resources. Both developed country "creditors" and developing country "borrowers" are members of the IMF. The developed countries provide the financial resources but rarely enter into IMF loan agreements; they are the creditors. Conversely, the developing countries use the lending services but contribute little to the pool of money available to lend because their quotas are smaller; they are the borrowers. Thus, tension is created around governance issues because these two groups, creditors and borrowers, have fundamentally different interests.
The criticism is that the system of voting power distribution through a quota system institutionalizes borrower subordination and creditor dominance. The resulting division of the IMF's membership into borrowers and non-borrowers has increased the controversy around conditionality because the borrowers are interested in increasing loan access while creditors want to maintain reassurance that the loans will be repaid.
A recent source revealed that the average overall use of IMF credit per decade increased, in real terms, by 21% between the 1970s and 1980s, and increased again by just over 22% from the 1980s to the 1991–2005 period. Another study has suggested that since 1950 the continent of Africa alone has received $300 billion from the IMF, the World Bank, and affiliate institutions.
A study by Bumba Mukherjee found that developing democratic countries benefit more from IMF programs than developing autocratic countries because policy-making, and the process of deciding where loaned money is used, is more transparent within a democracy. One study done by Randall Stone found that although earlier studies found little impact of IMF programs on balance of payments, more recent studies using more sophisticated methods and larger samples "usually found IMF programs improved the balance of payments".
The Exceptional Access Framework was created in 2003 when John B. Taylor was Under Secretary of the US Treasury for International Affairs. The new framework became fully operational in February 2003 and it was applied in the subsequent decisions on Argentina and Brazil. Its purpose was to place some sensible rules and limits on the way the IMF makes loans to support governments with debt problems—especially in emerging markets—and thereby move away from the bailout mentality of the 1990s. Such a reform was essential for ending the crisis atmosphere that then existed in emerging markets. The reform was closely related to, and put in place nearly simultaneously with, the actions of several emerging market countries to place collective action clauses in their bond contracts.
In 2010, the framework was abandoned so the IMF could make loans to Greece in a situation that was unsustainable and politically driven.
The topic of sovereign debt restructuring was taken up by IMF staff in April 2013 for the first time since 2005, in a report entitled "Sovereign Debt Restructuring: Recent Developments and Implications for the Fund's Legal and Policy Framework". The paper, which was discussed by the board on 20 May, summarised the recent experiences in Greece, St Kitts and Nevis, Belize, and Jamaica. An explanatory interview with deputy director Hugh Bredenkamp was published a few days later, as was a deconstruction by Matina Stevis of The Wall Street Journal.
The staff was directed to formulate an updated policy, which was accomplished on 22 May 2014 with a report entitled "The Fund's Lending Framework and Sovereign Debt: Preliminary Considerations", and taken up by the executive board on 13 June. The staff proposed that "in circumstances where a (Sovereign) member has lost market access and debt is considered sustainable ... the IMF would be able to provide Exceptional Access on the basis of a debt operation that involves an extension of maturities", which was labeled a "reprofiling operation". These reprofiling operations would "generally be less costly to the debtor and creditors—and thus to the system overall—relative to either an upfront debt reduction operation or a bail-out that is followed by debt reduction ... (and) would be envisaged only when both (a) a member has lost market access and (b) debt is assessed to be sustainable, but not with high probability ... Creditors will only agree if they understand that such an amendment is necessary to avoid a worse outcome: namely, a default and/or an operation involving debt reduction ... Collective action clauses, which now exist in most—but not all—bonds would be relied upon to address collective action problems."
According to a 2002 study by Randall W. Stone, the academic literature on the IMF shows "no consensus on the long-term effects of IMF programs on growth".
Some research has found that IMF loans can reduce the chance of a future banking crisis, while other studies have found that they can increase the risk of political crises. IMF programs can reduce the effects of a currency crisis.
Some research has found that IMF programs are less effective in countries which possess a developed-country patron (be it by foreign aid, membership of postcolonial institutions or UN voting patterns), seemingly because this patron allows countries to flout IMF program rules, as these rules are not consistently enforced. Some research has found that IMF loans reduce economic growth because they create economic moral hazard, reduce public investment, reduce incentives to create robust domestic policies and reduce private investor confidence. Other research has indicated that IMF loans can have a positive impact on economic growth and that their effects are highly nuanced.
Overseas Development Institute (ODI) research undertaken in 1980 included criticisms of the IMF which support the analysis that it is a pillar of what activist Titus Alexander calls global apartheid.
ODI conclusions were that the IMF's very nature of promoting market-oriented approaches attracted unavoidable criticism. On the other hand, the IMF could serve as a scapegoat while allowing governments to blame international bankers. The ODI conceded that the IMF was insensitive to political aspirations of LDCs while its policy conditions were inflexible.
Argentina, which had been considered by the IMF to be a model country in its compliance to policy proposals by the Bretton Woods institutions, experienced a catastrophic economic crisis in 2001, which some believe to have been caused by IMF-induced budget restrictions—which undercut the government's ability to sustain national infrastructure even in crucial areas such as health, education, and security—and privatisation of strategically vital national resources. Others attribute the crisis to Argentina's misdesigned fiscal federalism, which caused subnational spending to increase rapidly. The crisis added to widespread hatred of this institution in Argentina and other South American countries, with many blaming the IMF for the region's economic problems. The current—as of early 2006—trend toward moderate left-wing governments in the region and a growing concern with the development of a regional economic policy largely independent of big business pressures has been ascribed to this crisis.
In 2006, Akanksha Marphatia, a senior ActionAid policy analyst, stated that IMF policies in Africa undermine any possibility of meeting the Millennium Development Goals (MDGs) due to imposed restrictions that prevent spending on important sectors, such as education and health.
In an interview on 19 May 2008, the former Romanian Prime Minister Călin Popescu-Tăriceanu claimed that "Since 2005, IMF is constantly making mistakes when it appreciates the country's economic performances". Former Tanzanian President Julius Nyerere, who claimed that debt-ridden African states were ceding sovereignty to the IMF and the World Bank, famously asked, "Who elected the IMF to be the ministry of finance for every country in the world?"
Raghuram Rajan, former chief economist of the IMF and former Governor of the Reserve Bank of India (RBI), who predicted the financial crisis of 2007–08, criticised the IMF for remaining a sideline player to the developed world. He criticised the IMF for praising the monetary policies of the US, which he believed were wreaking havoc in emerging markets. He had been critical of the "ultra-loose money policies" of some unnamed countries.
Countries such as Zambia have not received proper aid with long-lasting effects, leading to concern from economists. Since 2005, Zambia (as well as 29 other African countries) did receive debt write-offs, which helped with the country's medical and education funds. However, Zambia returned to a debt of over half its GDP in less than a decade. American economist William Easterly, sceptical of the IMF's methods, had initially warned that "debt relief would simply encourage more reckless borrowing by crooked governments unless it was accompanied by reforms to speed up economic growth and improve governance", according to The Economist.
The IMF has been criticised for being "out of touch" with local economic conditions, cultures, and environments in the countries where it requires policy reform. The economic advice the IMF gives might not always take into consideration the difference between what spending means on paper and how it is felt by citizens. Countries charge that with excessive conditionality, they do not "own" the programmes and the links are broken between a recipient country's people, its government, and the goals being pursued by the IMF.
Jeffrey Sachs argues that the IMF's "usual prescription is 'budgetary belt tightening to countries who are much too poor to own belts'". Sachs wrote that the IMF's role as a generalist institution specialising in macroeconomic issues needs reform. Conditionality has also been criticised because a country can pledge collateral of "acceptable assets" to obtain waivers—if one assumes that all countries are able to provide "acceptable collateral".
One view is that conditionality undermines domestic political institutions. The recipient governments are sacrificing policy autonomy in exchange for funds, which can lead to public resentment of the local leadership for accepting and enforcing the IMF conditions. Political instability can result from more leadership turnover as political leaders are replaced in electoral backlashes. IMF conditions are often criticised for reducing government services, thus increasing unemployment.
Another criticism is that IMF policies are only designed to address poor governance, excessive government spending, excessive government intervention in markets, and too much state ownership. This assumes that this narrow range of issues represents the only possible problems; everything is standardised and differing contexts are ignored. A country may also be compelled to accept conditions it would not normally accept had it not been in a financial crisis and in need of assistance.
Moreover, regardless of which methodologies and data sets are used, studies come to the same conclusion: IMF programs exacerbate income inequality. Measured by the Gini coefficient, countries subject to IMF policies face increased income inequality.
It is claimed that conditionalities retard social stability and hence inhibit the stated goals of the IMF, while Structural Adjustment Programmes lead to an increase in poverty in recipient countries. The IMF sometimes advocates "austerity programmes", cutting public spending and increasing taxes even when the economy is weak, to bring budgets closer to a balance, thus reducing budget deficits. Countries are often advised to lower their corporate tax rate. In Globalization and Its Discontents, Joseph E. Stiglitz, former chief economist and senior vice-president at the World Bank, criticises these policies. He argues that by converting to a more monetarist approach, the purpose of the fund is no longer valid, as it was designed to provide funds for countries to carry out Keynesian reflations, and that the IMF "was not participating in a conspiracy, but it was reflecting the interests and ideology of the Western financial community."
Stiglitz concludes, "Modern high-tech warfare is designed to remove physical contact: dropping bombs from 50,000 feet ensures that one does not 'feel' what one does. Modern economic management is similar: from one's luxury hotel, one can callously impose policies about which one would think twice if one knew the people whose lives one was destroying."
The researchers Eric Toussaint and Damien Millet argue that the IMF's policies amount to a new form of colonisation that does not need a military presence:
Following the exigencies of the governments of the richest countries, the IMF permitted countries in crisis to borrow in order to avoid default on their repayments. Caught in the debt's downward spiral, developing countries soon had no other recourse than to take on new debt in order to repay the old debt. Before providing them with new loans, at higher interest rates, future lenders asked the IMF to intervene with the guarantee of ulterior reimbursement, asking for a signed agreement with the said countries. The IMF thus agreed to restart the flow of the 'finance pump' on condition that the concerned countries first use this money to reimburse banks and other private lenders, while restructuring their economy at the IMF's discretion: these were the famous conditionalities, detailed in the Structural Adjustment Programmes. The IMF and its ultra-liberal experts took control of the borrowing countries' economic policies. A new form of colonisation was thus instituted. It was not even necessary to establish an administrative or military presence; the debt alone maintained this new form of submission.
International politics play an important role in IMF decision making. The clout of a member state is roughly proportional to its contribution to IMF finances. The United States has the greatest number of votes and therefore wields the most influence. Domestic politics often come into play, with politicians in developing countries using conditionality to gain leverage over the opposition in order to influence policy.
In 2016, the IMF's research department published a report titled "Neoliberalism: Oversold?" which, while praising some aspects of the "neoliberal agenda", claims that the organisation has been "overselling" fiscal austerity policies and financial deregulation, which they claim has exacerbated both financial crises and economic inequality around the world.
In 2020 and 2021, Oxfam criticized the IMF for pushing tough austerity measures on many low-income countries during the COVID-19 pandemic, arguing that the resulting cuts to healthcare spending would hamper recipients' responses to the pandemic.
The role of the Bretton Woods institutions has been controversial since the late Cold War, because of claims that IMF policy makers supported military dictatorships friendly to American and European corporations, but also other anti-communist regimes and Communist regimes (such as Mobutu's Zaire and Ceaușescu's Romania, respectively). Critics also claim that the IMF is generally apathetic or hostile to human rights and labour rights. The controversy has helped spark the anti-globalization movement.
An example of IMF's support for a dictatorship was its ongoing support for Mobutu's rule in Zaire, although its own envoy, Erwin Blumenthal, provided a sobering report about the entrenched corruption and embezzlement and the inability of the country to pay back any loans.
Arguments in favour of the IMF say that economic stability is a precursor to democracy; however, critics highlight various examples in which democratised countries fell after receiving IMF loans.
A 2017 study found no evidence of IMF lending programs undermining democracy in borrowing countries. To the contrary, it found "evidence for modest but definitively positive conditional differences in the democracy scores of participating and non-participating countries".
On 28 June 2021, the IMF approved a US$1 billion loan to the Ugandan government despite protests from Ugandans in Washington, London and South Africa.
A number of civil society organisations have criticised the IMF's policies for their impact on access to food, particularly in developing countries. In October 2008, former United States president Bill Clinton delivered a speech to the United Nations on World Food Day, criticising the World Bank and IMF for their policies on food and agriculture:
We need the World Bank, the IMF, all the big foundations, and all the governments to admit that, for 30 years, we all blew it, including me when I was president. We were wrong to believe that food was like some other product in international trade, and we all have to go back to a more responsible and sustainable form of agriculture.
The FPIF remarked that there is a recurring pattern: "the destabilization of peasant producers by a one-two punch of IMF-World Bank structural adjustment programs that gutted government investment in the countryside followed by the massive influx of subsidized U.S. and European Union agricultural imports after the WTO's Agreement on Agriculture pried open markets."
A 2009 study concluded that the strict conditions resulted in thousands of deaths from tuberculosis in Eastern Europe as public health care had to be weakened. In the 21 countries to which the IMF had given loans, tuberculosis deaths rose by 16.6%. A 2017 systematic review of studies on the impact of structural adjustment programs on child and maternal health found that these programs have a detrimental effect on maternal and child health, among other adverse effects.
The IMF is only one of many international organisations, and it is a generalist institution that deals only with macroeconomic issues; its core areas of concern in developing countries are very narrow. One proposed reform is a movement towards close partnership with other specialist agencies such as UNICEF, the Food and Agriculture Organization (FAO), and the United Nations Development Program (UNDP).
Jeffrey Sachs argues in The End of Poverty that the IMF and the World Bank have "the brightest economists and the lead in advising poor countries on how to break out of poverty, but the problem is development economics". Development economics needs the reform, not the IMF. He also notes that IMF loan conditions should be paired with other reforms—e.g., trade reform in developed nations, debt cancellation, and increased financial assistance for investments in basic infrastructure. IMF loan conditions cannot stand alone and produce change; they need to be partnered with other reforms or other conditions as applicable.
The scholarly consensus is that IMF decision-making is not simply technocratic, but also guided by political and economic concerns. The United States is the IMF's most powerful member, and its influence reaches even into decision-making concerning individual loan agreements. The U.S. has historically been openly opposed to losing what Treasury Secretary Jacob Lew described in 2015 as its "leadership role" at the IMF, and the U.S.' "ability to shape international norms and practices".
Emerging markets were not well-represented for most of the IMF's history: despite being the most populous country, China had only the sixth-largest vote share, and Brazil's vote share was smaller than Belgium's. Reforms to give more powers to emerging economies were agreed by the G20 in 2010. The reforms could not pass, however, until they were ratified by the United States Congress, since 85% of the Fund's voting power was required for the reforms to take effect, and the Americans held more than 16% of voting power at the time. After repeated criticism, the U.S. finally ratified the voting reforms at the end of 2015. The OECD countries maintained their overwhelming majority of voting share, and the U.S. in particular retained its share at over 16%.
The criticism of the American- and European-dominated IMF has led to what some consider "disenfranchising the world" from the governance of the IMF. Raúl Prebisch, the founding secretary-general of the UN Conference on Trade and Development (UNCTAD), wrote that one of "the conspicuous deficiencies of the general economic theory, from the point of view of the periphery, is its false sense of universality".
Globalization encompasses three institutions: global financial markets and transnational companies, national governments linked to each other in economic and military alliances led by the United States, and rising "global governments" such as World Trade Organization (WTO), IMF, and World Bank. Charles Derber argues in his book People Before Profit, "These interacting institutions create a new global power system where sovereignty is globalized, taking power and constitutional authority away from nations and giving it to global markets and international bodies". Titus Alexander argues that this system institutionalises global inequality between western countries and the Majority World in a form of global apartheid, in which the IMF is a key pillar.
The establishment of globalised economic institutions has been both a symptom of and a stimulus for globalisation. The development of the World Bank, the IMF, regional development banks such as the European Bank for Reconstruction and Development (EBRD), and multilateral trade institutions such as the WTO signals a move away from the dominance of the state as the primary actor analysed in international affairs. Globalization has thus been transformative in terms of limiting of state sovereignty over the economy.
In April 2023, the Digital Currency Monetary Authority, an organization independent of the IMF, announced an international central bank digital currency called the Universal Monetary Unit, or Units for shorthand. Its ANSI character is Ü, and it is intended to facilitate international banking and international trade between countries and currencies, helping to settle SWIFT cross-border transactions at wholesale FX rates instantaneously with real-time settlement. In June, the IMF announced it was working on a platform for central bank digital currencies (CBDCs) that would enable transactions between nations. IMF Managing Director Kristalina Georgieva said that if central banks did not agree on a common platform, cryptocurrencies would fill the resulting vacuum.
Managing Director Lagarde (2011–2019) was convicted of giving preferential treatment to businessman-turned-politician Bernard Tapie as he pursued a legal challenge against the French government. At the time, Lagarde was the French economic minister. Within hours of her conviction, in which she escaped any punishment, the fund's 24-member executive board put to rest any speculation that she might have to resign, praising her "outstanding leadership" and the "wide respect" she commands around the world.
Former IMF Managing Director Rodrigo Rato was arrested in 2015 for alleged fraud, embezzlement and money laundering. In 2017, the Audiencia Nacional found Rato guilty of embezzlement and sentenced him to 4½ years' imprisonment. In 2018, the sentence was confirmed by the Supreme Court of Spain.
In March 2011, the Ministers of Economy and Finance of the African Union proposed to establish an African Monetary Fund.
At the 6th BRICS summit in July 2014 the BRICS nations (Brazil, Russia, India, China, and South Africa) announced the BRICS Contingent Reserve Arrangement (CRA) with an initial size of US$100 billion, a framework to provide liquidity through currency swaps in response to actual or potential short-term balance-of-payments pressures.
In 2014, the China-led Asian Infrastructure Investment Bank was established.
Life and Debt, a documentary film, deals with the IMF's policies' influence on Jamaica and its economy from a critical point of view. Debtocracy, a 2011 independent Greek documentary film, also criticises the IMF. Portuguese musician José Mário Branco's 1982 album FMI is inspired by the IMF's intervention in Portugal through monitored stabilisation programs in 1977–78. In the 2015 film Our Brand Is Crisis, the IMF is mentioned as a point of political contention, where the Bolivian population fears its electoral interference.
{
"paragraph_id": 0,
"text": "The International Monetary Fund (IMF) is a major financial agency of the United Nations, and an international financial institution funded by 190 member countries, with headquarters in Washington, D.C. It is regarded as the global lender of last resort to national governments, and a leading supporter of exchange-rate stability. Its stated mission is \"working to foster global monetary cooperation, secure financial stability, facilitate international trade, promote high employment and sustainable economic growth, and reduce poverty around the world.\" Established on December 27, 1945 at the Bretton Woods Conference, primarily according to the ideas of Harry Dexter White and John Maynard Keynes, it started with 29 member countries and the goal of reconstructing the international monetary system after World War II. It now plays a central role in the management of balance of payments difficulties and international financial crises. Through a quota system, countries contribute funds to a pool from which countries can borrow if they experience balance of payments problems. As of 2016, the fund had SDR 477 billion (about US$667 billion).",
"title": ""
},
{
"paragraph_id": 1,
"text": "The IMF works to stabilize and foster the economies of its member countries by its use of the fund, as well as other activities such as gathering and analyzing economic statistics and surveillance of its members' economies. IMF funds come from two major sources: quotas and loans. Quotas, which are pooled funds from member nations, generate most IMF funds. The size of members' quotas increase according to their economic and financial importance in the world. The quotas are increased periodically as a means of boosting the IMF's resources in the form of special drawing rights.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The current managing director (MD) and chairwoman of the IMF is Bulgarian economist Kristalina Georgieva, who has held the post since October 1, 2019. Indian-American economist Gita Gopinath, previously the chief economist, was appointed as first deputy managing director, effective January 21, 2022. Pierre-Olivier Gourinchas was appointed chief economist on January 24, 2022.",
"title": ""
},
{
"paragraph_id": 3,
"text": "According to the IMF itself, it works to foster global growth and economic stability by providing policy advice and financing the members by working with developing countries to help them achieve macroeconomic stability and reduce poverty. The rationale for this is that private international capital markets function imperfectly and many countries have limited access to financial markets. Such market imperfections, together with balance-of-payments financing, provide the justification for official financing, without which many countries could only correct large external payment imbalances through measures with adverse economic consequences. The IMF provides alternate sources of financing such as the Poverty Reduction and Growth Facility.",
"title": "Functions"
},
{
"paragraph_id": 4,
"text": "Upon the founding of the IMF, its three primary functions were:",
"title": "Functions"
},
{
"paragraph_id": 5,
"text": "The IMF's role was fundamentally altered by the floating exchange rates after 1971. It shifted to examining the economic policies of countries with IMF loan agreements to determine whether a shortage of capital was due to economic fluctuations or economic policy. The IMF also researched what types of government policy would ensure economic recovery. A particular concern of the IMF was to prevent financial crises, such as those in Mexico in 1982, Brazil in 1987, East Asia in 1997–98, and Russia in 1998, from spreading and threatening the entire global financial and currency system. The challenge was to promote and implement a policy that reduced the frequency of crises among emerging market countries, especially the middle-income countries which are vulnerable to massive capital outflows. Rather than maintaining a position of oversight of only exchange rates, their function became one of surveillance of the overall macroeconomic performance of member countries. Their role became a lot more active because the IMF now manages economic policy rather than just exchange rates.",
"title": "Functions"
},
{
"paragraph_id": 6,
"text": "In addition, the IMF negotiates conditions on lending and loans under their policy of conditionality, which was established in the 1950s. Low-income countries can borrow on concessional terms, which means there is a period of time with no interest rates, through the Extended Credit Facility (ECF), the Standby Credit Facility (SCF) and the Rapid Credit Facility (RCF). Non-concessional loans, which include interest rates, are provided mainly through the Stand-By Arrangements (SBA), the Flexible Credit Line (FCL), the Precautionary and Liquidity Line (PLL), and the Extended Fund Facility. The IMF provides emergency assistance via the Rapid Financing Instrument (RFI) to members facing urgent balance-of-payments needs.",
"title": "Functions"
},
{
"paragraph_id": 7,
"text": "The IMF is mandated to oversee the international monetary and financial system and monitor the economic and financial policies of its member countries. This activity is known as surveillance and facilitates international co-operation. Since the demise of the Bretton Woods system of fixed exchange rates in the early 1970s, surveillance has evolved largely by way of changes in procedures rather than through the adoption of new obligations. The responsibilities changed from those of guardians to those of overseers of members' policies.",
"title": "Functions"
},
{
"paragraph_id": 8,
"text": "The Fund typically analyses the appropriateness of each member country's economic and financial policies for achieving orderly economic growth, and assesses the consequences of these policies for other countries and for the global economy. For instance, The IMF played a significant role in individual countries, such as Armenia and Belarus, in providing financial support to achieve stabilization financing from 2009 to 2019. The maximum sustainable debt level of a polity, which is watched closely by the IMF, was defined in 2011 by IMF economists to be 120%. Indeed, it was at this number that the Greek economy melted down in 2010.",
"title": "Functions"
},
{
"paragraph_id": 9,
"text": "In 1995, the International Monetary Fund began to work on data dissemination standards with the view of guiding IMF member countries to disseminate their economic and financial data to the public. The International Monetary and Financial Committee (IMFC) endorsed the guidelines for the dissemination standards and they were split into two tiers: The General Data Dissemination System (GDDS) and the Special Data Dissemination Standard (SDDS).",
"title": "Functions"
},
{
"paragraph_id": 10,
"text": "The executive board approved the SDDS and GDDS in 1996 and 1997, respectively, and subsequent amendments were published in a revised Guide to the General Data Dissemination System. The system is aimed primarily at statisticians and aims to improve many aspects of statistical systems in a country. It is also part of the World Bank Millennium Development Goals (MDG) and Poverty Reduction Strategic Papers (PRSPs).",
"title": "Functions"
},
{
"paragraph_id": 11,
"text": "The primary objective of the GDDS is to encourage member countries to build a framework to improve data quality and statistical capacity building to evaluate statistical needs, set priorities in improving timeliness, transparency, reliability, and accessibility of financial and economic data. Some countries initially used the GDDS, but later upgraded to SDDS.",
"title": "Functions"
},
{
"paragraph_id": 12,
"text": "Some entities that are not IMF members also contribute statistical data to the systems:",
"title": "Functions"
},
{
"paragraph_id": 13,
"text": "A 2021 study found that the IMF's surveillance activities have \"a substantial impact on sovereign debt with much greater impacts in emerging than high-income economies\".",
"title": "Functions"
},
{
"paragraph_id": 14,
"text": "World Economic Outlook is a survey, published twice a year, by International Monetary Fund staff, which analyzes the global economy in the near and medium term.",
"title": "Functions"
},
{
"paragraph_id": 15,
"text": "IMF conditionality is a set of policies or conditions that the IMF requires in exchange for financial resources. The IMF does require collateral from countries for loans but also requires the government seeking assistance to correct its macroeconomic imbalances in the form of policy reform. If the conditions are not met, the funds are withheld. The concept of conditionality was introduced in a 1952 executive board decision and later incorporated into the Articles of Agreement.",
"title": "Functions"
},
{
"paragraph_id": 16,
"text": "Conditionality is associated with economic theory as well as an enforcement mechanism for repayment. Stemming primarily from the work of Jacques Polak, the theoretical underpinning of conditionality was the \"monetary approach to the balance of payments\".",
"title": "Functions"
},
{
"paragraph_id": 17,
"text": "Some of the conditions for structural adjustment can include:",
"title": "Functions"
},
{
"paragraph_id": 18,
"text": "These conditions are known as the Washington Consensus.",
"title": "Functions"
},
{
"paragraph_id": 19,
"text": "These loan conditions ensure that the borrowing country will be able to repay the IMF and that the country will not attempt to solve their balance-of-payment problems in a way that would negatively impact the international economy. The incentive problem of moral hazard—when economic agents maximise their own utility to the detriment of others because they do not bear the full consequences of their actions—is mitigated through conditions rather than providing collateral; countries in need of IMF loans do not generally possess internationally valuable collateral anyway.",
"title": "Functions"
},
{
"paragraph_id": 20,
"text": "Conditionality also reassures the IMF that the funds lent to them will be used for the purposes defined by the Articles of Agreement and provides safeguards that the country will be able to rectify its macroeconomic and structural imbalances. In the judgment of the IMF, the adoption by the member of certain corrective measures or policies will allow it to repay the IMF, thereby ensuring that the resources will be available to support other members.",
"title": "Functions"
},
{
"paragraph_id": 21,
"text": "As of 2004, borrowing countries have had a good track record for repaying credit extended under the IMF's regular lending facilities with full interest over the duration of the loan. This indicates that IMF lending does not impose a burden on creditor countries, as lending countries receive market-rate interest on most of their quota subscription, plus any of their own-currency subscriptions that are loaned out by the IMF, plus all of the reserve assets that they provide the IMF.",
"title": "Functions"
},
{
"paragraph_id": 22,
"text": "The IMF was originally laid out as a part of the Bretton Woods system exchange agreement in 1944. During the Great Depression, countries sharply raised barriers to trade in an attempt to improve their failing economies. This led to the devaluation of national currencies and a decline in world trade.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "This breakdown in international monetary cooperation created a need for oversight. The representatives of 45 governments met at the Bretton Woods Conference in the Mount Washington Hotel in Bretton Woods, New Hampshire, in the United States, to discuss a framework for postwar international economic cooperation and how to rebuild Europe.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "There were two views on the role the IMF should assume as a global economic institution. American delegate Harry Dexter White foresaw an IMF that functioned more like a bank, making sure that borrowing states could repay their debts on time. Most of White's plan was incorporated into the final acts adopted at Bretton Woods. British economist John Maynard Keynes, on the other hand, imagined that the IMF would be a cooperative fund upon which member states could draw to maintain economic activity and employment through periodic crises. This view suggested an IMF that helped governments and act as the United States government had during the New Deal to the great depression of the 1930s.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "The IMF formally came into existence on 27 December 1945, when the first 29 countries ratified its Articles of Agreement. By the end of 1946 the IMF had grown to 39 members. On 1 March 1947, the IMF began its financial operations, and on 8 May France became the first country to borrow from it.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "The IMF was one of the key organizations of the international economic system; its design allowed the system to balance the rebuilding of international capitalism with the maximization of national economic sovereignty and human welfare, also known as embedded liberalism. The IMF's influence in the global economy steadily increased as it accumulated more members. Its membership began to expand in the late 1950s and during the 1960s as many African countries became independent and applied for membership. But the Cold War limited the Fund's membership, with most countries in the Soviet sphere of influence not joining until 1970s and 1980s.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "The Bretton Woods exchange rate system prevailed until 1971 when the United States government suspended the convertibility of the US$ (and dollar reserves held by other governments) into gold. This is known as the Nixon Shock. The changes to the IMF articles of agreement reflecting these changes were ratified in 1976 by the Jamaica Accords. Later in the 1970s, large commercial banks began lending to states because they were awash in cash deposited by oil exporters. The lending of the so-called money center banks led to the IMF changing its role in the 1980s after a world recession provoked a crisis that brought the IMF back into global financial governance.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "In the mid-1980s, the IMF shifted its narrow focus from currency stabilization to a broader focus of promoting market-liberalizing reforms through structural adjustment programs. This shift occurred without a formal renegotiation of the organization's charter or operational guidelines. The Ronald Reagan administration, in particular Treasury Secretary James Baker, his assistant secretary David Mulford and deputy assistant secretary Charles Dallara, pressured the IMF to attach market-liberal reforms to the organization's conditional loans.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "During the 20th century, the IMF shifted its position on capital controls. Whereas the IMF permitted capital controls at its founding and throughout the 1970s, IMF staff increasingly favored free capital movement from 1980s onwards. This shift happened in the aftermath of an emerging consensus in economics on the desirability of free capital movement, retirement of IMF staff hired in the 1940s and 1950s, and the recruitment of staff exposed to new thinking in economics.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "The IMF provided two major lending packages in the early 2000s to Argentina (during the 1998–2002 Argentine great depression) and Uruguay (after the 2002 Uruguay banking crisis). However, by the mid-2000s, IMF lending was at its lowest share of world GDP since the 1970s.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "In May 2010, the IMF participated, in 3:11 proportion, in the first Greek bailout that totaled €110 billion, to address the great accumulation of public debt, caused by continuing large public sector deficits. As part of the bailout, the Greek government agreed to adopt austerity measures that would reduce the deficit from 11% in 2009 to \"well below 3%\" in 2014. The bailout did not include debt restructuring measures such as a haircut, to the chagrin of the Swiss, Brazilian, Indian, Russian, and Argentinian Directors of the IMF, with the Greek authorities themselves (at the time, PM George Papandreou and Finance Minister Giorgos Papakonstantinou) ruling out a haircut.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "A second bailout package of more than €100 billion was agreed upon over the course of a few months from October 2011, during which time Papandreou was forced from office. The so-called Troika, of which the IMF is part, are joint managers of this programme, which was approved by the executive directors of the IMF on 15 March 2012 for XDR 23.8 billion and saw private bondholders take a haircut of upwards of 50%. In the interval between May 2010 and February 2012 the private banks of Holland, France, and Germany reduced exposure to Greek debt from €122 billion to €66 billion.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "As of January 2012, the largest borrowers from the IMF in order were Greece, Portugal, Ireland, Romania, and Ukraine.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "On 25 March 2013, a €10 billion international bailout of Cyprus was agreed by the Troika, at the cost to the Cypriots of its agreement: to close the country's second-largest bank; to impose a one-time bank deposit levy on Bank of Cyprus uninsured deposits. No insured deposit of €100k or less were to be affected under the terms of a novel bail-in scheme.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "The topic of sovereign debt restructuring was taken up by the IMF in April 2013, for the first time since 2005, in a report entitled \"Sovereign Debt Restructuring: Recent Developments and Implications for the Fund's Legal and Policy Framework\". The paper, which was discussed by the board on 20 May, summarised the recent experiences in Greece, St Kitts and Nevis, Belize, and Jamaica. An explanatory interview with deputy director Hugh Bredenkamp was published a few days later, as was a deconstruction by Matina Stevis of The Wall Street Journal.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "In the October 2013, Fiscal Monitor publication, the IMF suggested that a capital levy capable of reducing Euro-area government debt ratios to \"end-2007 levels\" would require a very high tax rate of about 10%.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "The Fiscal Affairs department of the IMF, headed at the time by Acting Director Sanjeev Gupta, produced a January 2014 report entitled \"Fiscal Policy and Income Inequality\" that stated that \"Some taxes levied on wealth, especially on immovable property, are also an option for economies seeking more progressive taxation ... Property taxes are equitable and efficient, but underutilized in many economies ... There is considerable scope to exploit this tax more fully, both as a revenue source and as a redistributive instrument.\"",
"title": "History"
},
{
"paragraph_id": 38,
"text": "At the end of March 2014, the IMF secured an $18 billion bailout fund for the provisional government of Ukraine in the aftermath of the Revolution of Dignity.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "In late 2019, the IMF estimated global growth in 2020 to reach 3.4%, but due to the coronavirus, in November 2020, it expected the global economy to shrink by 4.4%.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "In March 2020, Kristalina Georgieva announced that the IMF stood ready to mobilize $1 trillion as its response to the COVID-19 pandemic. This was in addition to the $50 billion fund it had announced two weeks earlier, of which $5 billion had already been requested by Iran. One day earlier on 11 March, the UK called to pledge £150 million to the IMF catastrophe relief fund. It came to light on 27 March that \"more than 80 poor and middle-income countries\" had sought a bailout due to the coronavirus.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "On 13 April 2020, the IMF said that it \"would provide immediate debt relief to 25 member countries under its Catastrophe Containment and Relief Trust (CCRT)\" programme.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "Not all member countries of the IMF are sovereign states, and therefore not all \"member countries\" of the IMF are members of the United Nations. Amidst \"member countries\" of the IMF that are not member states of the UN are non-sovereign areas with special jurisdictions that are officially under the sovereignty of full UN member states, such as Aruba, Curaçao, Hong Kong, and Macao, as well as Kosovo. The corporate members appoint ex-officio voting members, who are listed below. All members of the IMF are also International Bank for Reconstruction and Development (IBRD) members and vice versa.",
"title": "Member countries"
},
{
"paragraph_id": 43,
"text": "Former members are Cuba (which left in 1964), and Taiwan, which was ejected from the IMF in 1980 after losing the support of the then United States President Jimmy Carter and was replaced by the People's Republic of China. However, \"Taiwan Province of China\" is still listed in the official IMF indices.",
"title": "Member countries"
},
{
"paragraph_id": 44,
"text": "Apart from Cuba, the other UN states that do not belong to the IMF are Liechtenstein, Monaco and North Korea. However, Andorra became the 190th member on 16 October 2020.",
"title": "Member countries"
},
{
"paragraph_id": 45,
"text": "Poland withdrew in 1950—allegedly pressured by the Soviet Union—but returned in 1986. The former Czechoslovakia was expelled in 1954 for \"failing to provide required data\" and was readmitted in 1990, after the Velvet Revolution.",
"title": "Member countries"
},
{
"paragraph_id": 46,
"text": "Any country may apply to be a part of the IMF. Post-IMF formation, in the early postwar period, rules for IMF membership were left relatively loose. Members needed to make periodic membership payments towards their quota, to refrain from currency restrictions unless granted IMF permission, to abide by the Code of Conduct in the IMF Articles of Agreement, and to provide national economic information. However, stricter rules were imposed on governments that applied to the IMF for funding.",
"title": "Member countries"
},
{
"paragraph_id": 47,
"text": "The countries that joined the IMF between 1945 and 1971 agreed to keep their exchange rates secured at rates that could be adjusted only to correct a \"fundamental disequilibrium\" in the balance of payments, and only with the IMF's agreement.",
"title": "Member countries"
},
{
"paragraph_id": 48,
"text": "Member countries of the IMF have access to information on the economic policies of all member countries, the opportunity to influence other members' economic policies, technical assistance in banking, fiscal affairs, and exchange matters, financial support in times of payment difficulties, and increased opportunities for trade and investment.",
"title": "Member countries"
},
{
"paragraph_id": 49,
"text": "The board of governors consists of one governor and one alternate governor for each member country. Each member country appoints its two governors. The Board normally meets once a year and is responsible for electing or appointing an executive director to the executive board. While the board of governors is officially responsible for approving quota increases, special drawing right allocations, the admittance of new members, compulsory withdrawal of members, and amendments to the Articles of Agreement and By-Laws, in practice it has delegated most of its powers to the IMF's executive board.",
"title": "Personnel"
},
{
"paragraph_id": 50,
"text": "The board of governors is advised by the International Monetary and Financial Committee and the Development Committee. The International Monetary and Financial Committee has 24 members and monitors developments in global liquidity and the transfer of resources to developing countries. The Development Committee has 25 members and advises on critical development issues and on financial resources required to promote economic development in developing countries.",
"title": "Personnel"
},
{
"paragraph_id": 51,
"text": "The board of governors reports directly to the managing director of the IMF, Kristalina Georgieva.",
"title": "Personnel"
},
{
"paragraph_id": 52,
"text": "24 Executive Directors make up the executive board. The executive directors represent all 189 member countries in a geographically based roster. Countries with large economies have their own executive director, but most countries are grouped in constituencies representing four or more countries.",
"title": "Personnel"
},
{
"paragraph_id": 53,
"text": "Following the 2008 Amendment on Voice and Participation which came into effect in March 2011, seven countries each appoint an executive director: the United States, Japan, China, Germany, France, the United Kingdom, and Saudi Arabia. The remaining 17 Directors represent constituencies consisting of 2 to 23 countries. This Board usually meets several times each week. The board membership and constituency is scheduled for periodic review every eight years.",
"title": "Personnel"
},
{
"paragraph_id": 54,
"text": "The IMF is led by a managing director, who is head of the staff and serves as chairman of the executive board. The managing director is the most powerful position at the IMF. Historically, the IMF's managing director has been a European citizen and the president of the World Bank has been an American citizen. However, this standard is increasingly being questioned and competition for these two posts may soon open up to include other qualified candidates from any part of the world. In August 2019, the International Monetary Fund has removed the age limit which is 65 or over for its managing director position.",
"title": "Personnel"
},
{
"paragraph_id": 55,
"text": "In 2011, the world's largest developing countries, the BRIC states, issued a statement declaring that the tradition of appointing a European as managing director undermined the legitimacy of the IMF and called for the appointment to be merit-based.",
"title": "Personnel"
},
{
"paragraph_id": 56,
"text": "Former managing director Dominique Strauss-Kahn was arrested in connection with charges of sexually assaulting a New York hotel room attendant and resigned on 18 May. The charges were later dropped. On 28 June 2011 Christine Lagarde was confirmed as managing director of the IMF for a five-year term starting on 5 July 2011. She was re-elected by consensus for a second five-year term, starting 5 July 2016, being the only candidate nominated for the post of managing director.",
"title": "Personnel"
},
{
"paragraph_id": 57,
"text": "The managing director is assisted by a First Deputy managing director (FDMD) who, by convention, has always been a citizen of the United States. Together, the managing director and their First Deputy lead the senior management of the IMF. Like the managing director, the First Deputy traditionally serves a five-year term.",
"title": "Personnel"
},
{
"paragraph_id": 58,
"text": "The chief economist leads the research division of the IMF and is a \"senior official\" of the IMF.",
"title": "Personnel"
},
{
"paragraph_id": 59,
"text": "IMF staff have considerable autonomy and are known to shape IMF policy. According to Jeffrey Chwieroth, \"It is the staff members who conduct the bulk of the IMF's tasks; they formulate policy proposals for consideration by member states, exercise surveillance, carry out loan negotiations and design the programs, and collect and systematize detailed information.\" Most IMF staff are economists. According to a 1968 study, nearly 60% of staff were from English-speaking developed countries. By 2004, between 40 and 50% of staff were from English-speaking developed countries.",
"title": "Personnel"
},
{
"paragraph_id": 60,
"text": "A 1996 study found that 90% of new staff with a PhD obtained them from universities in the United States or Canada. A 1999 study found that none of the new staff with a PhD obtained their PhD in the Global South.",
"title": "Personnel"
},
{
"paragraph_id": 61,
"text": "Voting power in the IMF is based on a quota system. Each member has a number of basic votes, equal to 5.502% of the total votes, plus one additional vote for each special drawing right (SDR) of 100,000 of a member country's quota. The SDR is the unit of account of the IMF and represents a potential claim to currency. It is based on a basket of key international currencies. The basic votes generate a slight bias in favour of small countries, but the additional votes determined by SDR outweigh this bias. Changes in the voting shares require approval by a super-majority of 85% of voting power.",
"title": "Voting power"
},
{
"paragraph_id": 62,
"text": "In December 2015, the United States Congress adopted a legislation authorising the 2010 Quota and Governance Reforms. As a result,",
"title": "Voting power"
},
{
"paragraph_id": 63,
"text": "The IMF's quota system was created to raise funds for loans. Each IMF member country is assigned a quota, or contribution, that reflects the country's relative size in the global economy. Each member's quota also determines its relative voting power. Thus, financial contributions from member governments are linked to voting power in the organization.",
"title": "Voting power"
},
{
"paragraph_id": 64,
"text": "This system follows the logic of a shareholder-controlled organization: wealthy countries have more say in the making and revision of rules. Since decision making at the IMF reflects each member's relative economic position in the world, wealthier countries that provide more money to the IMF have more influence than poorer members that contribute less; nonetheless, the IMF focuses on redistribution.",
"title": "Voting power"
},
{
"paragraph_id": 65,
"text": "Quotas are normally reviewed every five years and can be increased when deemed necessary by the board of governors. IMF voting shares are relatively inflexible: countries that grow economically have tended to become under-represented as their voting power lags behind. Currently, reforming the representation of developing countries within the IMF has been suggested. These countries' economies represent a large portion of the global economic system but this is not reflected in the IMF's decision-making process through the nature of the quota system. Joseph Stiglitz argues, \"There is a need to provide more effective voice and representation for developing countries, which now represent a much larger portion of world economic activity since 1944, when the IMF was created.\" In 2008, a number of quota reforms were passed including shifting 6% of quota shares to dynamic emerging markets and developing countries.",
"title": "Voting power"
},
{
"paragraph_id": 66,
"text": "The IMF's membership is divided along income lines: certain countries provide financial resources while others use these resources. Both developed country \"creditors\" and developing country \"borrowers\" are members of the IMF. The developed countries provide the financial resources but rarely enter into IMF loan agreements; they are the creditors. Conversely, the developing countries use the lending services but contribute little to the pool of money available to lend because their quotas are smaller; they are the borrowers. Thus, tension is created around governance issues because these two groups, creditors and borrowers, have fundamentally different interests.",
"title": "Voting power"
},
{
"paragraph_id": 67,
"text": "The criticism is that the system of voting power distribution through a quota system institutionalizes borrower subordination and creditor dominance. The resulting division of the IMF's membership into borrowers and non-borrowers has increased the controversy around conditionality because the borrowers are interested in increasing loan access while creditors want to maintain reassurance that the loans will be repaid.",
"title": "Voting power"
},
{
"paragraph_id": 68,
"text": "A recent source revealed that the average overall use of IMF credit per decade increased, in real terms, by 21% between the 1970s and 1980s, and increased again by just over 22% from the 1980s to the 1991–2005 period. Another study has suggested that since 1950 the continent of Africa alone has received $300 billion from the IMF, the World Bank, and affiliate institutions.",
"title": "Use"
},
{
"paragraph_id": 69,
"text": "A study by Bumba Mukherjee found that developing democratic countries benefit more from IMF programs than developing autocratic countries because policy-making, and the process of deciding where loaned money is used, is more transparent within a democracy. One study done by Randall Stone found that although earlier studies found little impact of IMF programs on balance of payments, more recent studies using more sophisticated methods and larger samples \"usually found IMF programs improved the balance of payments\".",
"title": "Use"
},
{
"paragraph_id": 70,
"text": "The Exceptional Access Framework was created in 2003 when John B. Taylor was Under Secretary of the US Treasury for International Affairs. The new Framework became fully operational in February 2003 and it was applied in the subsequent decisions on Argentina and Brazil. Its purpose was to place some sensible rules and limits on the way the IMF makes loans to support governments with debt problem—especially in emerging markets—and thereby move away from the bailout mentality of the 1990s. Such a reform was essential for ending the crisis atmosphere that then existed in emerging markets. The reform was closely related to and put in place nearly simultaneously with the actions of several emerging market countries to place collective action clauses in their bond contracts.",
"title": "Use"
},
{
"paragraph_id": 71,
"text": "In 2010, the framework was abandoned so the IMF could make loans to Greece in an unsustainable and political situation.",
"title": "Use"
},
{
"paragraph_id": 72,
"text": "The topic of sovereign debt restructuring was taken up by IMF staff in April 2013 for the first time since 2005, in a report entitled \"Sovereign Debt Restructuring: Recent Developments and Implications for the Fund's Legal and Policy Framework\". The paper, which was discussed by the board on 20 May, summarised the recent experiences in Greece, St Kitts and Nevis, Belize, and Jamaica. An explanatory interview with deputy director Hugh Bredenkamp was published a few days later, as was a deconstruction by Matina Stevis of The Wall Street Journal.",
"title": "Use"
},
{
"paragraph_id": 73,
"text": "The staff was directed to formulate an updated policy, which was accomplished on 22 May 2014 with a report entitled \"The Fund's Lending Framework and Sovereign Debt: Preliminary Considerations\", and taken up by the executive board on 13 June. The staff proposed that \"in circumstances where a (Sovereign) member has lost market access and debt is considered sustainable ... the IMF would be able to provide Exceptional Access on the basis of a debt operation that involves an extension of maturities\", which was labeled a \"reprofiling operation\". These reprofiling operations would \"generally be less costly to the debtor and creditors—and thus to the system overall—relative to either an upfront debt reduction operation or a bail-out that is followed by debt reduction ... (and) would be envisaged only when both (a) a member has lost market access and (b) debt is assessed to be sustainable, but not with high probability ... Creditors will only agree if they understand that such an amendment is necessary to avoid a worse outcome: namely, a default and/or an operation involving debt reduction ... Collective action clauses, which now exist in most—but not all—bonds would be relied upon to address collective action problems.\"",
"title": "Use"
},
{
"paragraph_id": 74,
"text": "According to a 2002 study by Randall W. Stone, the academic literature on the IMF shows \"no consensus on the long-term effects of IMF programs on growth\".",
"title": "Impact"
},
{
"paragraph_id": 75,
"text": "Some research has found that IMF loans can reduce the chance of a future banking crisis, while other studies have found that they can increase the risk of political crises. IMF programs can reduce the effects of a currency crisis.",
"title": "Impact"
},
{
"paragraph_id": 76,
"text": "Some research has found that IMF programs are less effective in countries which possess a developed-country patron (be it by foreign aid, membership of postcolonial institutions or UN voting patterns), seemingly due to this patron allowing countries to flaunt IMF program rules as these rules are not consistently enforced. Some research has found that IMF loans reduce economic growth due to creating an economic moral hazard, reducing public investment, reducing incentives to create a robust domestic policies and reducing private investor confidence. Other research has indicated that IMF loans can have a positive impact on economic growth and that their effects are highly nuanced.",
"title": "Impact"
},
{
"paragraph_id": 77,
"text": "Overseas Development Institute (ODI) research undertaken in 1980 included criticisms of the IMF which support the analysis that it is a pillar of what activist Titus Alexander calls global apartheid.",
"title": "Criticisms"
},
{
"paragraph_id": 78,
"text": "ODI conclusions were that the IMF's very nature of promoting market-oriented approaches attracted unavoidable criticism. On the other hand, the IMF could serve as a scapegoat while allowing governments to blame international bankers. The ODI conceded that the IMF was insensitive to political aspirations of LDCs while its policy conditions were inflexible.",
"title": "Criticisms"
},
{
"paragraph_id": 79,
"text": "Argentina, which had been considered by the IMF to be a model country in its compliance to policy proposals by the Bretton Woods institutions, experienced a catastrophic economic crisis in 2001, which some believe to have been caused by IMF-induced budget restrictions—which undercut the government's ability to sustain national infrastructure even in crucial areas such as health, education, and security—and privatisation of strategically vital national resources. Others attribute the crisis to Argentina's misdesigned fiscal federalism, which caused subnational spending to increase rapidly. The crisis added to widespread hatred of this institution in Argentina and other South American countries, with many blaming the IMF for the region's economic problems. The current—as of early 2006—trend toward moderate left-wing governments in the region and a growing concern with the development of a regional economic policy largely independent of big business pressures has been ascribed to this crisis.",
"title": "Criticisms"
},
{
"paragraph_id": 80,
"text": "In 2006, a senior ActionAid policy analyst Akanksha Marphatia stated that IMF policies in Africa undermine any possibility of meeting the Millennium Development Goals (MDGs) due to imposed restrictions that prevent spending on important sectors, such as education and health.",
"title": "Criticisms"
},
{
"paragraph_id": 81,
"text": "In an interview (2008-05-19), the former Romanian Prime Minister Călin Popescu-Tăriceanu claimed that \"Since 2005, IMF is constantly making mistakes when it appreciates the country's economic performances\". Former Tanzanian President Julius Nyerere, who claimed that debt-ridden African states were ceding sovereignty to the IMF and the World Bank, famously asked, \"Who elected the IMF to be the ministry of finance for every country in the world?\"",
"title": "Criticisms"
},
{
"paragraph_id": 82,
"text": "Former chief economist of IMF and former Reserve Bank of India (RBI) Governor Raghuram Rajan who predicted the financial crisis of 2007–08 criticised the IMF for remaining a sideline player to the developed world. He criticised the IMF for praising the monetary policies of the US, which he believed were wreaking havoc in emerging markets. He had been critical of \"ultra-loose money policies\" of some unnamed countries.",
"title": "Criticisms"
},
{
"paragraph_id": 83,
"text": "Countries such as Zambia have not received proper aid with long-lasting effects, leading to concern from economists. Since 2005, Zambia (as well as 29 other African countries) did receive debt write-offs, which helped with the country's medical and education funds. However, Zambia returned to a debt of over half its GDP in less than a decade. American economist William Easterly, sceptical of the IMF's methods, had initially warned that \"debt relief would simply encourage more reckless borrowing by crooked governments unless it was accompanied by reforms to speed up economic growth and improve governance\", according to The Economist.",
"title": "Criticisms"
},
{
"paragraph_id": 84,
"text": "The IMF has been criticised for being \"out of touch\" with local economic conditions, cultures, and environments in the countries they are requiring policy reform. The economic advice the IMF gives might not always take into consideration the difference between what spending means on paper and how it is felt by citizens. Countries charge that with excessive conditionality, they do not \"own\" the programmes and the links are broken between a recipient country's people, its government, and the goals being pursued by the IMF.",
"title": "Criticisms"
},
{
"paragraph_id": 85,
"text": "Jeffrey Sachs argues that the IMF's \"usual prescription is 'budgetary belt tightening to countries who are much too poor to own belts'\". Sachs wrote that the IMF's role as a generalist institution specialising in macroeconomic issues needs reform. Conditionality has also been criticised because a country can pledge collateral of \"acceptable assets\" to obtain waivers—if one assumes that all countries are able to provide \"acceptable collateral\".",
"title": "Criticisms"
},
{
"paragraph_id": 86,
"text": "One view is that conditionality undermines domestic political institutions. The recipient governments are sacrificing policy autonomy in exchange for funds, which can lead to public resentment of the local leadership for accepting and enforcing the IMF conditions. Political instability can result from more leadership turnover as political leaders are replaced in electoral backlashes. IMF conditions are often criticised for reducing government services, thus increasing unemployment.",
"title": "Criticisms"
},
{
"paragraph_id": 87,
"text": "Another criticism is that IMF policies are only designed to address poor governance, excessive government spending, excessive government intervention in markets, and too much state ownership. This assumes that this narrow range of issues represents the only possible problems; everything is standardised and differing contexts are ignored. A country may also be compelled to accept conditions it would not normally accept had they not been in a financial crisis in need of assistance.",
"title": "Criticisms"
},
{
"paragraph_id": 88,
"text": "On top of that, regardless of what methodologies and data sets used, it comes to same the conclusion of exacerbating income inequality. With Gini coefficient, it became clear that countries with IMF policies face increased income inequality.",
"title": "Criticisms"
},
{
"paragraph_id": 89,
"text": "It is claimed that conditionalities retard social stability and hence inhibit the stated goals of the IMF, while Structural Adjustment Programmes lead to an increase in poverty in recipient countries. The IMF sometimes advocates \"austerity programmes\", cutting public spending and increasing taxes even when the economy is weak, to bring budgets closer to a balance, thus reducing budget deficits. Countries are often advised to lower their corporate tax rate. In Globalization and Its Discontents, Joseph E. Stiglitz, former chief economist and senior vice-president at the World Bank, criticises these policies. He argues that by converting to a more monetarist approach, the purpose of the fund is no longer valid, as it was designed to provide funds for countries to carry out Keynesian reflations, and that the IMF \"was not participating in a conspiracy, but it was reflecting the interests and ideology of the Western financial community.\"",
"title": "Criticisms"
},
{
"paragraph_id": 90,
"text": "Stiglitz concludes, \"Modern high-tech warfare is designed to remove physical contact: dropping bombs from 50,000 feet ensures that one does not 'feel' what one does. Modern economic management is similar: from one's luxury hotel, one can callously impose policies about which one would think twice if one knew the people whose lives one was destroying.\"",
"title": "Criticisms"
},
{
"paragraph_id": 91,
"text": "The researchers Eric Toussaint and Damien Millet argue that the IMF's policies amount to a new form of colonisation that does not need a military presence:",
"title": "Criticisms"
},
{
"paragraph_id": 92,
"text": "Following the exigencies of the governments of the richest companies, the IMF, permitted countries in crisis to borrow in order to avoid default on their repayments. Caught in the debt's downward spiral, developing countries soon had no other recourse than to take on new debt in order to repay the old debt. Before providing them with new loans, at higher interest rates, future leaders asked the IMF, to intervene with the guarantee of ulterior reimbursement, asking for a signed agreement with the said countries. The IMF thus agreed to restart the flow of the 'finance pump' on condition that the concerned countries first use this money to reimburse banks and other private lenders, while restructuring their economy at the IMF's discretion: these were the famous conditionalities, detailed in the Structural Adjustment Programmes. The IMF and its ultra-liberal experts took control of the borrowing countries' economic policies. A new form of colonisation was thus instituted. It was not even necessary to establish an administrative or military presence; the debt alone maintained this new form of submission.",
"title": "Criticisms"
},
{
"paragraph_id": 93,
"text": "International politics play an important role in IMF decision making. The clout of member states is roughly proportional to its contribution to IMF finances. The United States has the greatest number of votes and therefore wields the most influence. Domestic politics often come into play, with politicians in developing countries using conditionality to gain leverage over the opposition to influence policy.",
"title": "Criticisms"
},
{
"paragraph_id": 94,
"text": "In 2016, the IMF's research department published a report titled \"Neoliberalism: Oversold?\" which, while praising some aspects of the \"neoliberal agenda\", claims that the organisation has been \"overselling\" fiscal austerity policies and financial deregulation, which they claim has exacerbated both financial crises and economic inequality around the world.",
"title": "Criticisms"
},
{
"paragraph_id": 95,
"text": "In 2020 and 2021, Oxfam criticized the IMF for forcing tough austerity measures on many low income countries during the COVID-19 pandemic, despite forcing cuts to healthcare spending, would hamper the recipient's response to the pandemic.",
"title": "Criticisms"
},
{
"paragraph_id": 96,
"text": "The role of the Bretton Woods institutions has been controversial since the late Cold War, because of claims that the IMF policy makers supported military dictatorships friendly to American and European corporations, but also other anti-communist and Communist regimes (such as Mobutu's Zaire and Ceaușescu's Romania, respectively). Critics also claim that the IMF is generally apathetic or hostile to human rights, and labour rights. The controversy has helped spark the anti-globalization movement.",
"title": "Criticisms"
},
{
"paragraph_id": 97,
"text": "An example of IMF's support for a dictatorship was its ongoing support for Mobutu's rule in Zaire, although its own envoy, Erwin Blumenthal, provided a sobering report about the entrenched corruption and embezzlement and the inability of the country to pay back any loans.",
"title": "Criticisms"
},
{
"paragraph_id": 98,
"text": "Arguments in favour of the IMF say that economic stability is a precursor to democracy; however, critics highlight various examples in which democratised countries fell after receiving IMF loans.",
"title": "Criticisms"
},
{
"paragraph_id": 99,
"text": "A 2017 study found no evidence of IMF lending programs undermining democracy in borrowing countries. To the contrary, it found \"evidence for modest but definitively positive conditional differences in the democracy scores of participating and non-participating countries\".",
"title": "Criticisms"
},
{
"paragraph_id": 100,
"text": "On 28 June 2021, the IMF approved a US$1 billion loan to the Ugandan government despite protests from Ugandans in Washington, London and South Africa.",
"title": "Criticisms"
},
{
"paragraph_id": 101,
"text": "A number of civil society organisations have criticised the IMF's policies for their impact on access to food, particularly in developing countries. In October 2008, former United States president Bill Clinton delivered a speech to the United Nations on World Food Day, criticising the World Bank and IMF for their policies on food and agriculture:",
"title": "Criticisms"
},
{
"paragraph_id": 102,
"text": "We need the World Bank, the IMF, all the big foundations, and all the governments to admit that, for 30 years, we all blew it, including me when I was president. We were wrong to believe that food was like some other product in international trade, and we all have to go back to a more responsible and sustainable form of agriculture.",
"title": "Criticisms"
},
{
"paragraph_id": 103,
"text": "The FPIF remarked that there is a recurring pattern: \"the destabilization of peasant producers by a one-two punch of IMF-World Bank structural adjustment programs that gutted government investment in the countryside followed by the massive influx of subsidized U.S. and European Union agricultural imports after the WTO's Agreement on Agriculture pried open markets.\"",
"title": "Criticisms"
},
{
"paragraph_id": 104,
"text": "A 2009 study concluded that the strict conditions resulted in thousands of deaths in Eastern Europe by tuberculosis as public health care had to be weakened. In the 21 countries to which the IMF had given loans, tuberculosis deaths rose by 16.6%. A 2017 systematic review on studies conducted on the impact that Structural adjustment programs have on child and maternal health found that these programs have a detrimental effect on maternal and child health among other adverse effects.",
"title": "Criticisms"
},
{
"paragraph_id": 105,
"text": "The IMF is only one of many international organisations, and it is a generalist institution that deals only with macroeconomic issues; its core areas of concern in developing countries are very narrow. One proposed reform is a movement towards close partnership with other specialist agencies such as UNICEF, the Food and Agriculture Organization (FAO), and the United Nations Development Program (UNDP).",
"title": "Criticisms"
},
{
"paragraph_id": 106,
"text": "Jeffrey Sachs argues in The End of Poverty that the IMF and the World Bank have \"the brightest economists and the lead in advising poor countries on how to break out of poverty, but the problem is development economics\". Development economics needs the reform, not the IMF. He also notes that IMF loan conditions should be paired with other reforms—e.g., trade reform in developed nations, debt cancellation, and increased financial assistance for investments in basic infrastructure. IMF loan conditions cannot stand alone and produce change; they need to be partnered with other reforms or other conditions as applicable.",
"title": "Criticisms"
},
{
"paragraph_id": 107,
"text": "The scholarly consensus is that IMF decision-making is not simply technocratic, but also guided by political and economic concerns. The United States is the IMF's most powerful member, and its influence reaches even into decision-making concerning individual loan agreements. The U.S. has historically been openly opposed to losing what Treasury Secretary Jacob Lew described in 2015 as its \"leadership role\" at the IMF, and the U.S.' \"ability to shape international norms and practices\".",
"title": "Criticisms"
},
{
"paragraph_id": 108,
"text": "Emerging markets were not well-represented for most of the IMF's history: Despite being the most populous country, China's vote share was the sixth largest; Brazil's vote share was smaller than Belgium's. Reforms to give more powers to emerging economies were agreed by the G20 in 2010. The reforms could not pass, however, until they were ratified by the United States Congress, since 85% of the Fund's voting power was required for the reforms to take effect, and the Americans held more than 16% of voting power at the time. After repeated criticism, the U.S. finally ratified the voting reforms at the end of 2015. The OECD countries maintained their overwhelming majority of voting share, and the U.S. in particular retained its share at over 16%.",
"title": "Criticisms"
},
{
"paragraph_id": 109,
"text": "The criticism of the American- and European-dominated IMF has led to what some consider \"disenfranchising the world\" from the governance of the IMF. Raúl Prebisch, the founding secretary-general of the UN Conference on Trade and Development (UNCTAD), wrote that one of \"the conspicuous deficiencies of the general economic theory, from the point of view of the periphery, is its false sense of universality\".",
"title": "Criticisms"
},
{
"paragraph_id": 110,
"text": "Globalization encompasses three institutions: global financial markets and transnational companies, national governments linked to each other in economic and military alliances led by the United States, and rising \"global governments\" such as World Trade Organization (WTO), IMF, and World Bank. Charles Derber argues in his book People Before Profit, \"These interacting institutions create a new global power system where sovereignty is globalized, taking power and constitutional authority away from nations and giving it to global markets and international bodies\". Titus Alexander argues that this system institutionalises global inequality between western countries and the Majority World in a form of global apartheid, in which the IMF is a key pillar.",
"title": "IMF and globalization"
},
{
"paragraph_id": 111,
"text": "The establishment of globalised economic institutions has been both a symptom of and a stimulus for globalisation. The development of the World Bank, the IMF, regional development banks such as the European Bank for Reconstruction and Development (EBRD), and multilateral trade institutions such as the WTO signals a move away from the dominance of the state as the primary actor analysed in international affairs. Globalization has thus been transformative in terms of limiting of state sovereignty over the economy.",
"title": "IMF and globalization"
},
{
"paragraph_id": 112,
"text": "In April 2023, the IMF launched their international central bank digital currency through their Digital Currency Monetary Authority, it will be called the Universal Monetary Unit, or Units for shorthand. The ANSI character will be Ü and will be used to facilitate international banking and international trade between countries and currencies. It will help facilitate SWIFT transactions on cross border transactions at wholesale FX rates instantaneously with real-time settlements. In June, it announced it was working on a platform for central bank digital currencies (CBDCs) that would enable transctions between nations. IMF Managing Director Kristalina Georgieva said that if central banks did not agree on a common platform, cryptocurrencies would fill the resulting vacuum.",
"title": "IMF and globalization"
},
{
"paragraph_id": 113,
"text": "Managing Director Lagarde (2011–2019) was convicted of giving preferential treatment to businessman-turned-politician Bernard Tapie as he pursued a legal challenge against the French government. At the time, Lagarde was the French economic minister. Within hours of her conviction, in which she escaped any punishment, the fund's 24-member executive board put to rest any speculation that she might have to resign, praising her \"outstanding leadership\" and the \"wide respect\" she commands around the world.",
"title": "Scandals"
},
{
"paragraph_id": 114,
"text": "Former IMF Managing Director Rodrigo Rato was arrested in 2015 for alleged fraud, embezzlement and money laundering. In 2017, the Audiencia Nacional found Rato guilty of embezzlement and sentenced him to 4+1⁄2 years' imprisonment. In 2018, the sentence was confirmed by the Supreme Court of Spain.",
"title": "Scandals"
},
{
"paragraph_id": 115,
"text": "In March 2011, the Ministers of Economy and Finance of the African Union proposed to establish an African Monetary Fund.",
"title": "Alternatives"
},
{
"paragraph_id": 116,
"text": "At the 6th BRICS summit in July 2014 the BRICS nations (Brazil, Russia, India, China, and South Africa) announced the BRICS Contingent Reserve Arrangement (CRA) with an initial size of US$100 billion, a framework to provide liquidity through currency swaps in response to actual or potential short-term balance-of-payments pressures.",
"title": "Alternatives"
},
{
"paragraph_id": 117,
"text": "In 2014, the China-led Asian Infrastructure Investment Bank was established.",
"title": "Alternatives"
},
{
"paragraph_id": 118,
"text": "Life and Debt, a documentary film, deals with the IMF's policies' influence on Jamaica and its economy from a critical point of view. Debtocracy, a 2011 independent Greek documentary film, also criticises the IMF. Portuguese musician José Mário Branco's 1982 album FMI is inspired by the IMF's intervention in Portugal through monitored stabilisation programs in 1977–78. In the 2015 film Our Brand Is Crisis, the IMF is mentioned as a point of political contention, where the Bolivian population fears its electoral interference.",
"title": "In the media"
}
]
| The International Monetary Fund (IMF) is a major financial agency of the United Nations, and an international financial institution funded by 190 member countries, with headquarters in Washington, D.C. It is regarded as the global lender of last resort to national governments, and a leading supporter of exchange-rate stability. Its stated mission is "working to foster global monetary cooperation, secure financial stability, facilitate international trade, promote high employment and sustainable economic growth, and reduce poverty around the world." Established on December 27, 1945 at the Bretton Woods Conference, primarily according to the ideas of Harry Dexter White and John Maynard Keynes, it started with 29 member countries and the goal of reconstructing the international monetary system after World War II. It now plays a central role in the management of balance of payments difficulties and international financial crises. Through a quota system, countries contribute funds to a pool from which countries can borrow if they experience balance of payments problems. As of 2016, the fund had SDR 477 billion. The IMF works to stabilize and foster the economies of its member countries by its use of the fund, as well as other activities such as gathering and analyzing economic statistics and surveillance of its members' economies. IMF funds come from two major sources: quotas and loans. Quotas, which are pooled funds from member nations, generate most IMF funds. The size of members' quotas increase according to their economic and financial importance in the world. The quotas are increased periodically as a means of boosting the IMF's resources in the form of special drawing rights. The current managing director (MD) and chairwoman of the IMF is Bulgarian economist Kristalina Georgieva, who has held the post since October 1, 2019. Indian-American economist Gita Gopinath, previously the chief economist, was appointed as first deputy managing director, effective January 21, 2022. Pierre-Olivier Gourinchas was appointed chief economist on January 24, 2022. | 2001-11-17T15:24:28Z | 2023-12-30T12:58:59Z | [
"Template:Authority control",
"Template:Hidden begin",
"Template:Flagcountry",
"Template:Cite conference",
"Template:Refend",
"Template:Annotated link",
"Template:Cite web",
"Template:Dead link",
"Template:Redirect",
"Template:Use American English",
"Template:Frac",
"Template:Col div",
"Template:Cite journal",
"Template:Harvnb",
"Template:Cite magazine",
"Template:Commons category",
"Template:Further",
"Template:Hidden end",
"Template:Main",
"Template:When",
"Template:Central banks",
"Template:Short description",
"Template:Legend",
"Template:Blockquote",
"Template:Cite news",
"Template:Cbignore",
"Template:Citation",
"Template:Refbegin",
"Template:Library resources box",
"Template:Citation needed",
"Template:More citations needed",
"Template:Nowrap",
"Template:Note",
"Template:Trade",
"Template:International organisations",
"Template:See also",
"Template:Webarchive",
"Template:Wikiquote",
"Template:Official website",
"Template:As of",
"Template:Cite press release",
"Template:Portal",
"Template:Col div end",
"Template:Reflist",
"Template:Cite book",
"Template:Use dmy dates",
"Template:Infobox organization",
"Template:Ill",
"Template:' \"",
"Template:Cite SSRN",
"Template:Economics"
]
| https://en.wikipedia.org/wiki/International_Monetary_Fund |
15,252 | Islands of the Clyde | The Islands of the Firth of Clyde are the fifth largest of the major Scottish island groups after the Inner and Outer Hebrides, Orkney and Shetland. They are situated in the Firth of Clyde between Ayrshire and Argyll and Bute. There are about forty islands and skerries. Only four are inhabited, and only nine are larger than 40 hectares (99 acres). The largest and most populous are Arran and Bute. They are served by dedicated ferry routes, as are Great Cumbrae and Holy Island. Unlike the isles in the four larger Scottish archipelagos, none of the isles in this group are connected to one another or to the mainland by bridges.
The geology and geomorphology of the area is complex, and the islands and the surrounding sea lochs each have distinctive features. The influence of the Atlantic Ocean and the North Atlantic Drift create a mild, damp oceanic climate. There is a diversity of wildlife, including three species of rare endemic trees.
The larger islands have been continuously inhabited since Neolithic times. The cultures of their inhabitants were influenced by the emergence of the kingdom of Dál Riata, beginning in 500 AD. The islands were then politically absorbed into the emerging kingdom of Alba, led by Kenneth MacAlpin. During the early Middle Ages, the islands experienced Viking incursions. In the 13th century, they became part of the Kingdom of Scotland.
The Highland Boundary Fault runs past Bute and through the northern part of Arran. Therefore, from a geological perspective, some of the islands are in the Highlands and some in the Central Lowlands. As a result of Arran's geological similarity to Scotland, it is sometimes referred to as "Scotland in miniature" and the island is a popular destination for geologists. They come to Arran to study its intrusive igneous landforms, such as sills and dykes, as well as its sedimentary and metasedimentary rocks, which range widely in age. Visiting in 1787, the geologist James Hutton found his first example of an unconformity there. The spot where he discovered it is one of the most famous places in the history of the study of geology. The group of weakly metamorphosed rocks that form the Highland Border Complex lie discontinuously along the Highland Boundary Fault. One of the most prominent exposures is along Loch Fad on Bute. Ailsa Craig, which lies some 25 kilometres (16 mi) south of Arran, has been quarried for a rare type of micro-granite containing riebeckite, known as "Ailsite". It is used by Kays of Scotland to make curling stones. (As of 2004, 60 to 70% of all curling stones in use globally were made from granite quarried on the island.)
Like the rest of Scotland, the Firth of Clyde was covered by ice sheets during the Pleistocene ice ages, and the landscape has been much affected by glaciation. Back then, Arran's highest peaks may have been nunataks. Sea-level changes and the isostatic rise of land after the last retreat of the ice created clifflines behind raised beaches, which are a prominent feature of the entire coastline. The action of these forces has made charting the post glacial coastlines a complex task.
The various soil types on the islands reflect their diverse geology. Bute has the most productive land, and it has a pattern of deposits that is typical of the southwest of Scotland. In the eroded valleys, there is a mixture of boulder clay and other glacial deposits. Elsewhere, especially to the south and west, there are raised beach- and marine deposits, which in some places, such as Stravanan, result in a machair landscape inland from the sandy bays.
The Firth of Clyde, in which these islands lie, is north of the Irish Sea and has numerous branching inlets. Some of those inlets, including Loch Goil, Loch Long, Gare Loch, Loch Fyne, and the estuary of the River Clyde, have their own substantial features. In places, the effect of glaciation on the seabed is pronounced. For example, the Firth is 320 metres (1,050 ft) deep between Arran and Bute, even though they are only 8 kilometres (5.0 mi) apart. The islands all stand exposed to wind and tide. Various lighthouses, such as those on Ailsa Craig, Pladda, and Davaar, act as an aid to navigation.
The Firth of Clyde lies between 55 and 56 degrees north latitude. This is the same latitude as Labrador in Canada and north of the Aleutian Islands. However, the influence of the North Atlantic Drift—the northern extension of the Gulf Stream—moderates the winter weather. As a result, the area enjoys a mild, damp oceanic climate. Temperatures are generally cool, averaging about 6 °C (43 °F) in January and 14 °C (57 °F) in July at sea level. Snow seldom lies at sea level, and frosts are generally less frequent than they are on the mainland. In common with most islands off the west coast of Scotland, the average annual rainfall is generally high: between 1,300 mm (51 in) on Bute, in the Cumbraes, and in the south of Arran, and 1,900 mm (75 in) in the north of Arran. The Arran mountains are even wetter: Their summits receive over 2,550 mm (100 in) of rain annually. May, June and July are the sunniest months: on average, there is a total of 200 hours of bright sunshine during that 3-month period each year. Southern Bute benefits from a particularly large number of sunny days.
Mesolithic humans arrived in the area of the Firth of Clyde during the 4th millennium BC, probably from Ireland. This initial arrival was followed by another wave of Neolithic peoples using the same route. In fact, there is some evidence that the Firth of Clyde was a significant route through which mainland Scotland was colonised during the Neolithic period. The inhabitants of Argyll, the Clyde estuary, and elsewhere in western Scotland at that time developed a distinctive style of megalithic structure that is known today as the Clyde cairns. About 100 of these structures have been found. They were used for interment of the dead. They are rectangular or trapezoidal, with a small enclosing chamber into which the person's body was placed. They are faced with large slabs of stone set on end (sometimes subdivided into smaller compartments). They also feature a forecourt area, which may have been used for displays or rituals associated with interment. They are mostly found in Arran, Bute, and Kintyre. It is thought likely that the Clyde cairns were the earliest forms of Neolithic monument constructed by incoming settlers. However, only a few of the cairns have been radiocarbon dated. A cairn at Monamore on Arran has been dated to 3160 BC, although other evidence suggests that it was almost certainly built earlier than that, possibly around 4000 BC. The area also features numerous standing stones dating from prehistoric times, including six stone circles on Machrie Moor in Arran, and other examples on Great Cumbrae and Bute.
Later, Bronze Age settlers also constructed megaliths at various sites. Many of them date from the 2nd millennium BC. However, instead of chambered cairns, these peoples constructed burial cists, which can be found, for example, on Inchmarnock. Evidence of settlement during this period, especially the early part of it, is scant. However, one notable artifact has been found on Bute that dates from around 2000 BC. Known today as the “Queen of the Inch necklace,” it is an article of jewellery made of lignite (commonly called “jet”).
During the early Iron Age, the Brythonic culture held sway. There is no evidence that the Roman occupation of southern Scotland extended into these islands.
Beginning in the 2nd century AD, Irish influence was at work in the region, and by the 6th century, Gaels had established the kingdom of Dál Riata there. Unlike earlier inhabitants, such as the P-Celtic speaking Brythons, these Gaels spoke a form of Gaelic (a modern version of which is still spoken today in the Hebrides). During this period, through the efforts of Saint Ninian and others, Christianity slowly supplanted Druidism. The kingdom of Dál Riata flourished from the rule of Fergus Mór in the late 5th century until the Viking incursions beginning in the late 8th century. Islands close to the shores of modern Ayrshire presumably remained part of the Kingdom of Strathclyde during this period, whilst the main islands became part of the emerging Kingdom of Alba founded by Kenneth MacAlpin (Cináed mac Ailpín).
Beginning in the 9th century and into the 13th century, the Islands of the Clyde constituted a border zone between the Norse Suðreyjar and Scotland, and many of them were under Norse hegemony.
Beginning in the last half of the 12th century, and then into the early 1200s, the islands may well have served as the power base of Somhairle mac Giolla Brighde and his descendants. During this time, the islands seem to have come under the sway of the Steward of Scotland’s authority and to have been taken over by the expanding Stewart lordship.
This western extension of Scottish authority appears to have been one of the factors motivating the Norwegian invasion of the region in 1230, during which the invaders seized Rothesay Castle.
In 1263, Norwegian troops commanded by Haakon Haakonarson repeated the feat, but the ensuing Battle of Largs between Scots and Norwegian forces, which took place on the shores of the Firth of Clyde, was inconclusive as a military contest.
This battle marked an ultimately fatal weakening of Norwegian power in Scotland. Haakon retreated to Orkney, where he died in December 1263, consoled on his death bed by recitations of the old sagas. Following his death, under the 1266 Treaty of Perth, all rights that the Norwegian Crown "had of old therein" in relation to the islands were yielded to the Kingdom of Scotland.
Politically, from the conclusion of the Treaty of Perth in 1266 to the present day, all of the islands of the Clyde have been part of Scotland.
Ecclesiastically, beginning in the early medieval period all of these isles were part of the Diocese of Sodor and Man, based at Peel, on the Isle of Man. After 1387, the seat of the Bishopric of the Isles was relocated to the north, first to Snizort on Skye and then to Iona. This arrangement continued until the Scottish Reformation in the 16th century, when Scotland broke with the Catholic Church.
The mid-1700s marked the beginning of a century of significant change. New forms of transport, industry, and agriculture brought an end to ways of life that had endured for centuries. The Battle of Culloden in 1746 foreshadowed the end of the clan system. These changes improved living standards for some, but came at a cost for others.
In the late 18th and early 19th centuries, Alexander, the 10th Duke of Hamilton (1767–1852), and others implemented a controversial agricultural-reform programme called the Highland Clearances that had a devastating effect on many of Arran's inhabitants. Whole villages were emptied, and the Gaelic culture of the island was dealt a terminal blow. (A memorial to the tenant farmers evicted from the island by this programme was later erected on the shore at Lamlash, funded by a Canadian descendant of some of those evicted.)
From the 1850s to the late 20th century, cargo ships known as “Clyde Puffers” (made famous by an early-20th-century story collection called the Vital Spark), were the workhorses of the islands, carrying a great deal of produce and a great variety of products to and from the islands. In May 1889, the Caledonian Steam Packet Company (CSP) was founded and began operating steamer services to and from Gourock for the Caledonian Railway. The company soon expanded by taking over rival steamer operators. David MacBrayne operated the Glasgow-to-Ardrishaig steamer service, as part of the so-called "Royal Route" to Oban. During the 20th century, many of the islands were developed as tourist resorts along the lines of mainland resorts such as Largs and Troon, but catering for Glaswegians who preferred to holiday "Doon the Watter". In 1973, CSP and MacBraynes combined their Clyde and West Highland operations under the new name of Caledonian MacBrayne. A government-owned corporation, they serve Great Cumbrae, Arran, and Bute, and also run mainland-to-mainland ferries across the firth. Private companies operate services from Arran to Holy Isle, and from McInroy's Point (Gourock) to Hunter's Quay on the Cowal peninsula.
Politically, from 1890 to 1975, most of the islands comprised the traditional County of Bute, and its inhabitants were represented by the county council. Since the 1975 reorganization, however, the islands have been split more or less equally between two modern council authorities: Argyll and Bute, and North Ayrshire. Only Ailsa Craig and Lady Isle in South Ayrshire are not part of either of these two council areas.
Below is a table listing the nine islands of the Firth of Clyde that have an area greater than 40 hectares (approximately 100 acres), showing their population and listing the smaller uninhabited islets adjacent to them (including tidal islets separated only when the tide is higher, and skerries exposed only when the tide is lower).
As of 2001, six of the islands were inhabited, but that included one with only two residents (Davaar), and one with only one resident (Sanda). At the 2011 census, there was no one usually resident on either of these islands.
The islets that lie remote from the larger islands are described separately below.
There are two islets in Gare Loch: Green Island and Perch Rock. Gare Loch is small, but it hosts the Faslane Naval Base, where the UK's Trident nuclear submarines are located. At its southern end, the loch opens into the Firth of Clyde via the Rhu narrows.
There are also several islets in the Kilbrannan Sound, which lies between Arran and the Kintyre peninsula. They are: An Struthlag, Cour Island, Eilean Carrach (Carradale), Eilean Carrach (Skipness), Eilean Grianain, Eilean Sunadale, Gull Isle, Island Ross and Thorn Isle.
(The Norse sagas tell a story about the Kintyre peninsula. In the late 11th century, a king of Norway (Magnus Barefoot) devised a plan to increase his territorial possessions. He persuaded a king of Scotland (Malcolm III or Edgar) to agree that he could take possession of an area of land on the west coast of Scotland if a ship could sail around it. Magnus then arranged for one of his longships to be dragged across the 1.5 kilometres (0.93 mi)-long isthmus at the northern tip of the Kintyre peninsula, which connects Kintyre to the mainland. (The isthmus lies between East Loch Tarbert and West Loch Tarbert). He took command of the ship's tiller himself. Then, declaring that Kintyre had "better land than the best of the Hebrides", he claimed that dragging his ship across the isthmus had been equivalent to “sailing around” the peninsula, and thus that the peninsula counted as “land around which a ship could sail.” As a result of this maneuver, he was able to claim possession of the peninsula, which remained under Norse rule for more than a dozen years.)
There are also several islets and skerries in Loch Fyne, which extends 65 kilometres (40 mi) inland from the Sound of Bute, and is the longest of Scotland's sea lochs. They are: Duncuan Island, Eilean Ardgaddan, Eilean a' Bhuic, Eilean Aoghainn, Eilean a' Chomhraig, Eilean an Dúnain, Eilean Buidhe (Ardmarnock), Eilean Buidhe (Portavadie), Eilean Fraoch, Eilean Math-ghamhna, Eilean Mór, Glas Eilean, Heather Island, Inverneil Island, Kilbride Island, and Liath Eilean.
There are several islets surrounding Horse Isle in North Ayrshire: Broad Rock, East Islet, Halftide Rock, High Rock and North Islet.
Lady Isle lies off the South Ayrshire coast near Troon. At one time it housed "ane old chapell with an excellent spring of water". However, in June 1821, someone set fire to the "turf and pasture". Once the pasture had burned away, gales blew much of the island's soil into the sea. This permanently destroyed the island's ability to support grazing.
There are no islands in Loch Goil or Loch Long, which are fjord-like arms in the northern part of the firth.
Here is a list of places along the shores of the Firth of Clyde that are not islands, but have names that misleadingly suggest they are islands (eilean being Gaelic for "island"): Eilean na Beithe, Portavadie; Eilean Beag, Cove; Eilean Dubh, Dalchenna, Loch Fyne; Eilean nan Gabhar, Melldalloch, Kyles of Bute; Barmore Island, just north of Tarbert, Kintyre; Eilean Aoidh, south of Portavadie; Eilean Leathan, Kilbrannan Sound just south of Torrisdale Bay; Island Muller, Kilbrannan Sound north of Campbeltown.
Around the Firth of Clyde, there are populations of red deer, red squirrel, badger, otter, adder, and common lizard. In the Firth itself, there are harbour porpoises, basking sharks and various species of dolphin. Davaar is home to a population of wild goats.
Over 200 bird species have been recorded as sighted in the area, including the black guillemot, the eider, the peregrine falcon, and the golden eagle. In 1981, there were 28 ptarmigans sighted on Arran, but in 2009 it was reported that extensive surveys had been unable to record any ptarmigan sightings. Similarly, the red-billed chough no longer breeds on the island.
Arran has three species of the rare endemic trees known as Arran Whitebeams: the Scottish or Arran whitebeam; the cut-leaved whitebeam; and the Catacol whitebeam. All of them are found only in Gleann Diomhan, and they are amongst the most endangered tree species in the world. (Gleann Diomhan was formerly part of a designated national nature reserve; the designation was removed in 2011, but the glen continues to be part of an area designated as a Site of Special Scientific Interest.) Only 283 Arran whitebeam and 236 cut-leaved whitebeam were recorded as mature trees in 1980, and it is thought that grazing pressures and insect damage are preventing regeneration of the woodland. The Catacol whitebeam was discovered in 2007, but only two specimens have been found, so steps have been taken to protect them.
The Roman historian Tacitus refers to the Clota, meaning the Clyde. The derivation is not certain but is probably from the Brythonic Clouta, which became Clut in Old Welsh. The name literally means "wash", probably referring to a river goddess who is seen as "the washer" or "the strongly flowing one". The derivation of the word “Bute” is also uncertain. The Norse name for it is Bót, an Old Irish word for "fire", which might be a reference to signal fires. The etymology of “Arran” is no clearer. Haswell-Smith (2004) suggests that it derives from a Brythonic word meaning "high place", although Watson (1926) suggests it may be pre-Celtic.
{
"paragraph_id": 0,
"text": "The Islands of the Firth of Clyde are the fifth largest of the major Scottish island groups after the Inner and Outer Hebrides, Orkney and Shetland. They are situated in the Firth of Clyde between Ayrshire and Argyll and Bute. There are about forty islands and skerries. Only four are inhabited, and only nine are larger than 40 hectares (99 acres). The largest and most populous are Arran and Bute. They are served by dedicated ferry routes, as are Great Cumbrae and Holy Island. Unlike the isles in the four larger Scottish archipelagos, none of the isles in this group are connected to one another or to the mainland by bridges.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The geology and geomorphology of the area is complex, and the islands and the surrounding sea lochs each have distinctive features. The influence of the Atlantic Ocean and the North Atlantic Drift create a mild, damp oceanic climate. There is a diversity of wildlife, including three species of rare endemic trees.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The larger islands have been continuously inhabited since Neolithic times. The cultures of their inhabitants were influenced by the emergence of the kingdom of Dál Riata, beginning in 500 AD. The islands were then politically absorbed into the emerging kingdom of Alba, led by Kenneth MacAlpin. During the early Middle Ages, the islands experienced Viking incursions. In the 13th century, they became part of the Kingdom of Scotland.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The Highland Boundary Fault runs past Bute and through the northern part of Arran. Therefore, from a geological perspective, some of the islands are in the Highlands and some in the Central Lowlands. As a result of Arran's geological similarity to Scotland, it is sometimes referred to as \"Scotland in miniature\" and the island is a popular destination for geologists. They come to Arran to study its intrusive igneous landforms, such as sills and dykes, as well as its sedimentary and metasedimentary rocks, which range widely in age. Visiting in 1787, the geologist James Hutton found his first example of an unconformity there. The spot where he discovered it is one of the most famous places in the history of the study of geology. The group of weakly metamorphosed rocks that form the Highland Border Complex lie discontinuously along the Highland Boundary Fault. One of the most prominent exposures is along Loch Fad on Bute. Ailsa Craig, which lies some 25 kilometres (16 mi) south of Arran, has been quarried for a rare type of micro-granite containing riebeckite, known as \"Ailsite\". It is used by Kays of Scotland to make curling stones. (As of 2004, 60 to 70% of all curling stones in use globally were made from granite quarried on the island.)",
"title": "Geology and geography"
},
{
"paragraph_id": 4,
"text": "Like the rest of Scotland, the Firth of Clyde was covered by ice sheets during the Pleistocene ice ages, and the landscape has been much affected by glaciation. Back then, Arran's highest peaks may have been nunataks. Sea-level changes and the isostatic rise of land after the last retreat of the ice created clifflines behind raised beaches, which are a prominent feature of the entire coastline. The action of these forces has made charting the post glacial coastlines a complex task.",
"title": "Geology and geography"
},
{
"paragraph_id": 5,
"text": "The various soil types on the islands reflect their diverse geology. Bute has the most productive land, and it has a pattern of deposits that is typical of the southwest of Scotland. In the eroded valleys, there is a mixture of boulder clay and other glacial deposits. Elsewhere, especially to the south and west, there are raised beach- and marine deposits, which in some places, such as Stravanan, result in a machair landscape inland from the sandy bays.",
"title": "Geology and geography"
},
{
"paragraph_id": 6,
"text": "The Firth of Clyde, in which these islands lie, is north of the Irish Sea and has numerous branching inlets. Some of those inlets, including Loch Goil, Loch Long, Gare Loch, Loch Fyne, and the estuary of the River Clyde, have their own substantial features. In places, the effect of glaciation on the seabed is pronounced. For example, the Firth is 320 metres (1,050 ft) deep between Arran and Bute, even though they are only 8 kilometres (5.0 mi) apart. The islands all stand exposed to wind and tide. Various lighthouses, such as those on Ailsa Craig, Pladda, and Davaar, act as an aid to navigation.",
"title": "Geology and geography"
},
{
"paragraph_id": 7,
"text": "The Firth of Clyde lies between 55 and 56 degrees north latitude. This is the same latitude as Labrador in Canada and north of the Aleutian Islands. However, the influence of the North Atlantic Drift—the northern extension of the Gulf Stream—moderates the winter weather. As a result, the area enjoys a mild, damp oceanic climate. Temperatures are generally cool, averaging about 6 °C (43 °F) in January and 14 °C (57 °F) in July at sea level. Snow seldom lies at sea level, and frosts are generally less frequent than they are on the mainland. In common with most islands off the west coast of Scotland, the average annual rainfall is generally high: between 1,300 mm (51 in) on Bute, in the Cumbraes, and in the south of Arran, and 1,900 mm (75 in) in the north of Arran. The Arran mountains are even wetter: Their summits receive over 2,550 mm (100 in) of rain annually. May, June and July are the sunniest months: on average, there is a total of 200 hours of bright sunshine during that 3-month period each year. Southern Bute benefits from a particularly large number of sunny days.",
"title": "Climate"
},
{
"paragraph_id": 8,
"text": "Mesolithic humans arrived in the area of the Firth of Clyde during the 4th millennium BC, probably from Ireland. This initial arrival was followed by another wave of Neolithic peoples using the same route. In fact, there is some evidence that the Firth of Clyde was a significant route through which mainland Scotland was colonised during the Neolithic period. The inhabitants of Argyll, the Clyde estuary, and elsewhere in western Scotland at that time developed a distinctive style of megalithic structure that is known today as the Clyde cairns. About 100 of these structures have been found. They were used for interment of the dead. They are rectangular or trapezoidal, with a small enclosing chamber into which the person's body was placed. They are faced with large slabs of stone set on end (sometimes subdivided into smaller compartments). They also feature a forecourt area, which may have been used for displays or rituals associated with interment. They are mostly found in Arran, Bute, and Kintyre. It is thought likely that the Clyde cairns were the earliest forms of Neolithic monument constructed by incoming settlers. However, only a few of the cairns have been radiocarbon dated. A cairn at Monamore on Arran has been dated to 3160 BC, although other evidence suggests that it was almost certainly built earlier than that, possibly around 4000 BC. The area also features numerous standing stones dating from prehistoric times, including six stone circles on Machrie Moor in Arran, and other examples on Great Cumbrae and Bute.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Later, Bronze Age settlers also constructed megaliths at various sites. Many of them date from the 2nd millennium BC. However, instead of chambered cairns, these peoples constructed burial cists, which can be found, for example, on Inchmarnock. Evidence of settlement during this period, especially the early part of it, is scant. However, one notable artifact has been found on Bute that dates from around 2000 BC. Known today as the “Queen of the Inch necklace,” it is an article of jewellery made of lignite (commonly called “jet”).",
"title": "History"
},
{
"paragraph_id": 10,
"text": "During the early Iron Age, the Brythonic culture held sway. There is no evidence that the Roman occupation of southern Scotland extended into these islands.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Beginning in the 2nd century AD, Irish influence was at work in the region, and by the 6th century, Gaels had established the kingdom of Dál Riata there. Unlike earlier inhabitants, such as the P-Celtic speaking Brythons, these Gaels spoke a form of Gaelic (a modern version of which is still spoken today in the Hebrides). During this period, through the efforts of Saint Ninian and others, Christianity slowly supplanted Druidism. The kingdom of Dál Riata flourished from the rule of Fergus Mór in the late 5th century until the Viking incursions beginning in the late 8th century. Islands close to the shores of modern Ayrshire presumably remained part of the Kingdom of Strathclyde during this period, whilst the main islands became part of the emerging Kingdom of Alba founded by Kenneth MacAlpin (Cináed mac Ailpín).",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Beginning in the 9th century and into the 13th century, the Islands of the Clyde constituted a border zone between the Norse Suðreyjar and Scotland, and many of them were under Norse hegemony.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Beginning in the last half of the 12th century, and then into the early 1200s, the islands may well have served as the power base of Somhairle mac Giolla Brighde and his descendants. During this time, the islands seem to have come under the sway of the Steward of Scotland’s authority and to have been taken over by the expanding Stewart lordship.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "This western extension of Scottish authority appears to have been one of the factors motivating the Norwegian invasion of the region in 1230, during which the invaders seized Rothesay Castle.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In 1263, Norwegian troops commanded by Haakon Haakonarson repeated the feat, but the ensuing Battle of Largs between Scots and Norwegian forces, which took place on the shores of the Firth of Clyde, was inconclusive as a military contest.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "This battle marked an ultimately fatal weakening of Norwegian power in Scotland. Haakon retreated to Orkney, where he died in December 1263, consoled on his death bed by recitations of the old sagas. Following his death, under the 1266 Treaty of Perth, all rights that the Norwegian Crown \"had of old therein\" in relation to the islands were yielded to the Kingdom of Scotland.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Politically, from the conclusion of the Treaty of Perth in 1266 to the present day, all of the islands of the Clyde have been part of Scotland.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Ecclesiastically, beginning in the early medieval period all of these isles were part of the Diocese of Sodor and Man, based at Peel, on the Isle of Man. After 1387, the seat of the Bishopric of the Isles was relocated to the north, first to Snizort on Skye and then to Iona. This arrangement continued until the Scottish Reformation in the 16th century, when Scotland broke with the Catholic Church.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The mid-1700s marked the beginning of a century of significant change. New forms of transport, industry, and agriculture brought an end to ways of life that had endured for centuries. The Battle of Culloden in 1746 foreshadowed the end of the clan system. These changes improved living standards for some, but came at a cost for others.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "In the late 18th and early 19th centuries, Alexander, the 10th Duke of Hamilton (1767–1852), and others implemented a controversial agricultural-reform programme called the Highland Clearances that had a devastating effect on many of Arran's inhabitants. Whole villages were emptied, and the Gaelic culture of the island was dealt a terminal blow. (A memorial to the tenant farmers evicted from the island by this programme was later erected on the shore at Lamlash, funded by a Canadian descendant of some of those evicted.)",
"title": "History"
},
{
"paragraph_id": 21,
"text": "From the 1850s to the late 20th century, cargo ships known as “Clyde Puffers” (made famous by an early-20th-century story collection called the Vital Spark), were the workhorses of the islands, carrying a great deal of produce and a great variety of products to and from the islands. In May 1889, the Caledonian Steam Packet Company (CSP) was founded and began operating steamer services to and from Gourock for the Caledonian Railway. The company soon expanded by taking over rival steamer operators. David MacBrayne operated the Glasgow-to-Ardrishaig steamer service, as part of the so-called \"Royal Route\" to Oban. During the 20th century, many of the islands were developed as tourist resorts along the lines of mainland resorts such as Largs and Troon, but catering for Glaswegians who preferred to holiday \"Doon the Watter\". In 1973, CSP and MacBraynes combined their Clyde and West Highland operations under the new name of Caledonian MacBrayne. A government-owned corporation, they serve Great Cumbrae, Arran, and Bute, and also run mainland-to-mainland ferries across the firth. Private companies operate services from Arran to Holy Isle, and from McInroy's Point (Gourock) to Hunter's Quay on the Cowal peninsula.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Politically, from 1890 to 1975, most of the islands comprised the traditional County of Bute, and its inhabitants were represented by the county council. Since the 1975 reorganization, however, the islands have been split more or less equally between two modern council authorities: Argyll and Bute, and North Ayrshire. Only Ailsa Craig and Lady Isle in South Ayrshire are not part of either of these two council areas.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Below is a table listing the nine islands of the Firth of Clyde that have an area greater than 40 hectares (approximately 100 acres), showing their population and listing the smaller uninhabited islets adjacent to them (including tidal islets separated only when the tide is higher, and skerries exposed only when the tide is lower).",
"title": "Islands"
},
{
"paragraph_id": 24,
"text": "As of 2001, six of the islands were inhabited, but that included one with only two residents (Davaar), and one with only one resident (Sanda). At the 2011 census, there was no one usually resident on either of these islands.",
"title": "Islands"
},
{
"paragraph_id": 25,
"text": "The islets that lie remote from the larger islands are described separately below.",
"title": "Islands"
},
{
"paragraph_id": 26,
"text": "There are two islets in Gare Loch: Green Island and Perch Rock. Gare Loch is small, but it hosts the Faslane Naval Base, where the UK's Trident nuclear submarines are located. At its southern end, the loch opens into the Firth of Clyde via the Rhu narrows.",
"title": "Islands"
},
{
"paragraph_id": 27,
"text": "There are also several islets in the Kilbrannan Sound, which lies between Arran and the Kintyre peninsula. They are: An Struthlag, Cour Island, Eilean Carrach (Carradale), Eilean Carrach (Skipness), Eilean Grianain, Eilean Sunadale, Gull Isle, Island Ross and Thorn Isle.",
"title": "Islands"
},
{
"paragraph_id": 28,
"text": "(The Norse sagas tell a story about the Kintyre peninsula. In the late 11th century, a king of Norway (Magnus Barefoot) devised a plan to increase his territorial possessions. He persuaded a king of Scotland (Malcolm III or Edgar) to agree that he could take possession of an area of land on the west coast of Scotland if a ship could sail around it. Magnus then arranged for one of his longships to be dragged across the 1.5 kilometres (0.93 mi)-long isthmus at the northern tip of the Kintyre peninsula, which connects Kintyre to the mainland. (The isthmus lies between East Loch Tarbert and West Loch Tarbert). He took command of the ship's tiller himself. Then, declaring that Kintyre had \"better land than the best of the Hebrides\", he claimed that dragging his ship across the isthmus had been equivalent to “sailing around” the peninsula, and thus that the peninsula counted as “land around which a ship could sail.” As a result of this maneuver, he was able to claim possession of the peninsula, which remained under Norse rule for more than a dozen years.)",
"title": "Islands"
},
{
"paragraph_id": 29,
"text": "There are also several islets and skerries in Loch Fyne, which extends 65 kilometres (40 mi) inland from the Sound of Bute, and is the longest of Scotland's sea lochs. They are: Duncuan Island, Eilean Ardgaddan, Eilean a' Bhuic, Eilean Aoghainn, Eilean a' Chomhraig, Eilean an Dúnain, Eilean Buidhe (Ardmarnock), Eilean Buidhe (Portavadie), Eilean Fraoch, Eilean Math-ghamhna, Eilean Mór, Glas Eilean, Heather Island, Inverneil Island, Kilbride Island, and Liath Eilean.",
"title": "Islands"
},
{
"paragraph_id": 30,
"text": "There are several islets surrounding Horse Isle in North Ayrshire: Broad Rock, East Islet, Halftide Rock, High Rock and North Islet.",
"title": "Islands"
},
{
"paragraph_id": 31,
"text": "Lady Isle lies off the South Ayrshire coast near Troon. At one time it housed \"ane old chapell with an excellent spring of water\". However, in June 1821, someone set fire to the \"turf and pasture\". Once the pasture had burned away, gales blew much of the island's soil into the sea. This permanently destroyed the island's ability to support grazing.",
"title": "Islands"
},
{
"paragraph_id": 32,
"text": "There are no islands in Loch Goil or Loch Long, which are fjord-like arms in the northern part of the firth.",
"title": "Islands"
},
{
"paragraph_id": 33,
"text": "Here is a list of places along that shores of the Firth of Clyde that are not islands, but have names that misleadingly suggest they are islands (eilean being Gaelic for \"island\"): Eilean na Beithe, Portavadie; Eilean Beag, Cove; Eilean Dubh, Dalchenna, Loch Fyne; Eilean nan Gabhar, Melldalloch, Kyles of Bute; Barmore Island, just north of Tarbert, Kintyre; Eilean Aoidh, south of Portavadie; Eilean Leathan, Kilbrannan Sound just south of Torrisdale Bay; Island Muller, Kilbrannan Sound north of Campbeltown.",
"title": "Islands"
},
{
"paragraph_id": 34,
"text": "Around the Firth of Clyde, there are populations of red deer, red squirrel, badger, otter, adder, and common lizard. In the Firth itself, there are harbour porpoises, basking sharks and various species of dolphin. Davaar is home to a population of wild goats.",
"title": "Natural history"
},
{
"paragraph_id": 35,
"text": "Over 200 bird species have been recorded as sighted in the area, including the black guillemot, the eider, the peregrine falcon, and the golden eagle. In 1981, there were 28 ptarmigans sighted on Arran, but in 2009 it was reported that extensive surveys had been unable to find any recorded ptarmigans sightings. Similarly, the red-billed chough no longer breeds on the island.",
"title": "Natural history"
},
{
"paragraph_id": 36,
"text": "Arran has three species of the rare endemic trees known as Arran Whitebeams: the Scottish or Arran whitebeam; the cut-leaved whitebeam; and the Catacol whitebeam. All of them are found only in Gleann Diomhan, and they are amongst the most endangered tree species in the world. (Gleann Diomhan was formerly part of a designated national nature reserve—the designation was removed in 2011)- and it continues to be part of an area designated as a Site of Special Scientific Interest.) Only 283 Arran whitebeam and 236 cut-leaved whitebeam were recorded as mature trees in 1980, and it is thought that grazing pressures and insect damage are preventing regeneration of the woodland. The Catacol whitebeam was discovered in 2007, but only two specimens have been found, so steps have been taken to protect them.",
"title": "Natural history"
},
{
"paragraph_id": 37,
"text": "The Roman historian Tacitus refers to the Clota, meaning the Clyde. The derivation is not certain but is probably from the Brythonic Clouta, which became Clut in Old Welsh. The name literally means \"wash\", probably referring to a river goddess who is seen as \"the washer\" or \"the strongly flowing one\". The derivation of the word “Bute” is also uncertain. The Norse name for it is Bót an Old Irish word for \"fire\", which might be a reference to signal fires. The etymology of “Arran” is no clearer. Haswell-Smith (2004) suggests that it derive from a Brythonic word meaning \"high place\", although Watson (1926) suggests it may be pre-Celtic.",
"title": "Etymology"
},
{
"paragraph_id": 38,
"text": "",
"title": "References"
}
]
The Islands of the Firth of Clyde are the fifth largest of the major Scottish island groups after the Inner and Outer Hebrides, Orkney and Shetland. They are situated in the Firth of Clyde between Ayrshire and Argyll and Bute. There are about forty islands and skerries. Only four are inhabited, and only nine are larger than 40 hectares. The largest and most populous are Arran and Bute. They are served by dedicated ferry routes, as are Great Cumbrae and Holy Island. Unlike the isles in the four larger Scottish archipelagos, none of the isles in this group are connected to one another or to the mainland by bridges. The geology and geomorphology of the area are complex, and the islands and the surrounding sea lochs each have distinctive features. The influence of the Atlantic Ocean and the North Atlantic Drift creates a mild, damp oceanic climate. There is a diversity of wildlife, including three species of rare endemic trees. The larger islands have been continuously inhabited since Neolithic times. The cultures of their inhabitants were influenced by the emergence of the kingdom of Dál Riata, beginning in 500 AD. The islands were then politically absorbed into the emerging kingdom of Alba, led by Kenneth MacAlpin. During the early Middle Ages, the islands experienced Viking incursions. In the 13th century, they became part of the Kingdom of Scotland. | 2001-11-17T23:57:49Z | 2023-11-20T05:18:13Z | [
"Template:Islands of the Clyde",
"Template:British Isles",
"Template:Main",
"Template:Reflist",
"Template:NRS1C",
"Template:Cite book",
"Template:Refend",
"Template:Short description",
"Template:Webarchive",
"Template:Cite web",
"Template:Refbegin",
"Template:Gaelic Placenames",
"Template:Islands of Scotland",
"Template:Use dmy dates",
"Template:GRO10",
"Template:Cite journal",
"Template:ISSN",
"Template:ISBN",
"Template:Convert",
"Template:Portal",
"Template:Haswell-Smith",
"Template:Good article"
]
| https://en.wikipedia.org/wiki/Islands_of_the_Clyde |
15,253 | International Bank Account Number | The International Bank Account Number (IBAN) is an internationally agreed upon system of identifying bank accounts across national borders to facilitate the communication and processing of cross border transactions with a reduced risk of transcription errors. An IBAN uniquely identifies the account of a customer at a financial institution. It was originally adopted by the European Committee for Banking Standards (ECBS) and since 1997 as the international standard ISO 13616 under the International Organization for Standardization (ISO). The current version is ISO 13616:2020, which indicates the Society for Worldwide Interbank Financial Telecommunication (SWIFT) as the formal registrar. Initially developed to facilitate payments within the European Union, it has been implemented by most European countries and numerous countries in other parts of the world, mainly in the Middle East and the Caribbean. As of July 2023, 86 countries were using the IBAN numbering system.
The IBAN consists of up to 34 alphanumeric characters comprising a country code; two check digits; and a number that includes the domestic bank account number, branch identifier, and potential routing information. The check digits enable a check of the bank account number to confirm its integrity before submitting a transaction.
Before IBAN, differing national standards for bank account identification (i.e. bank, branch, routing codes, and account number) were confusing for some users. This often led to necessary routing information being missing from payments. Routing information as specified by ISO 9362 (also known as Business Identifier Codes (BIC), SWIFT ID or SWIFT code, and SWIFT-BIC) does not require a specific format for the transaction so the identification of accounts and transaction types is left to agreements of the transaction partners. It also does not contain check digits, so errors of transcription were not detectable and it was not possible for a sending bank to validate the routing information prior to submitting the payment. Routing errors caused delayed payments and incurred extra costs to the sending and receiving banks and often to intermediate routing banks.
In 1997, to overcome these difficulties, the International Organization for Standardization (ISO) published ISO 13616:1997. This proposal had a degree of flexibility that the European Committee for Banking Standards (ECBS) believed would make it unworkable, and they produced a "slimmed down" version of the standard which, amongst other things, permitted only upper-case letters and required that the IBAN for each country have a fixed length. ISO 13616:1997 was subsequently withdrawn and replaced by ISO 13616:2003. The standard was revised again in 2007 when it was split into two parts. ISO 13616-1:2007 "specifies the elements of an international bank account number (IBAN) used to facilitate the processing of data internationally in data interchange, in financial environments as well as within and between other industries" but "does not specify internal procedures, file organization techniques, storage media, languages, etc. to be used in its implementation". ISO 13616-2:2007 describes "the Registration Authority (RA) responsible for the registry of IBAN formats that are compliant with ISO 13616-1 [and] the procedures for registering ISO 13616-compliant IBAN formats". The official IBAN registrar under ISO 13616-2:2007 is SWIFT.
IBAN imposes a flexible but regular format sufficient for account identification and contains validation information to avoid errors of transcription. It carries all the routing information needed to get a payment from one bank to another wherever it may be; it contains key bank account details such as country code, branch codes (known as sort codes in the UK and Ireland) and account numbers, and it contains check digits which can be validated at source according to a single standard procedure. Where used, IBANs have reduced trans-national money transfer errors to under 0.1% of total payments.
The IBAN consists of up to 34 alphanumeric characters, as follows:
The check digits represent the checksum of the bank account number which is used by banking systems to confirm that the number contains no simple errors.
In order to facilitate reading by humans, IBANs are traditionally expressed in groups of four characters separated by spaces, the last group being of variable length as shown in the example below; when transmitted electronically however spaces are omitted.
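For illustration, the following minimal Python sketch (function name arbitrary) converts an IBAN from the electronic format, which has no spaces, into the traditional print format of four-character groups; the example IBAN is the fictitious United Kingdom number that can be reconstructed from the worked check-digit example later in this article, and the sketch is illustrative only.

def to_print_format(iban: str) -> str:
    # Electronic format has no spaces; print format groups the characters in fours,
    # with the last group of variable length.
    s = iban.replace(" ", "").upper()
    return " ".join(s[i:i + 4] for i in range(0, len(s), 4))

print(to_print_format("GB82WEST12345698765432"))  # GB82 WEST 1234 5698 7654 32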
Permitted IBAN characters are the digits 0 to 9 and the 26 Latin alphabetic characters A to Z. This applies even in countries where these characters are not used in the national language (e.g. Greece).
The Basic Bank Account Number (BBAN) format is decided by the national central bank or designated payment authority of each country. There is no consistency between the formats adopted. The national authority may register its BBAN format with SWIFT but is not obliged to do so. It may adopt IBAN without registration. SWIFT also acts as the registration authority for the SWIFT system, which is used by most countries that have not adopted IBAN. A major difference between the two systems is that under SWIFT there is no requirement that BBANs used within a country be of a pre-defined length.
The BBAN must be of a fixed length for the country and comprise case-insensitive alphanumeric characters. It includes the domestic bank account number, branch identifier, and potential routing information. Each country can have a different national routing/account numbering system, up to a maximum of 30 alphanumeric characters.
The check digits enable the sending bank (or its customer) to perform a sanity check of the routing destination and account number from a single string of data at the time of data entry. This check is guaranteed to detect any instances where a single character has been omitted, duplicated, mistyped or where two characters have been transposed. Thus routing and account number errors are virtually eliminated.
One of the design aims of the IBAN was to enable as much validation as possible to be done at the point of data entry. In particular, the computer program that accepts an IBAN will be able to validate:
The check digits are calculated using MOD-97-10 as per ISO/IEC 7064:2003 (abbreviated to mod-97 in this article), which specifies a set of check character systems capable of protecting strings against errors which occur when people copy or key data. In particular, the standard states that the following can be detected:
The underlying rule for IBANs is that the account-servicing financial institution should issue the IBAN, as there are a number of cases where different IBANs could be generated from the same account and branch numbers and still satisfy the generic IBAN validation rules. In particular, where 00 is a valid check digit, 97 will not be; likewise, if 01 is a valid check digit, 98 will not be; and similarly with 02 and 99.
The UN CEFACT TBG5 has published a free IBAN validation service in 32 languages for all 57 countries that have adopted the IBAN standard. They have also published the Javascript source code of the verification algorithm.
An English language IBAN checker for ECBS member country bank accounts is available on its website.
An IBAN is validated by converting it into an integer and performing a basic mod-97 operation (as described in ISO 7064) on it. If the IBAN is valid, the remainder equals 1. The algorithm of IBAN validation is as follows:
If the remainder is 1, the check digit test is passed and the IBAN might be valid.
Example (fictitious United Kingdom bank, sort code 12-34-56, account number 98765432):
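The following minimal Python sketch illustrates this check. It assumes the conventional steps of moving the first four characters to the end and replacing each letter with two digits (A = 10 ... Z = 35); the IBAN used, GB82 WEST 1234 5698 7654 32, is reconstructed from the fictitious components above and is not a real account. The sketch is illustrative only, not a reference implementation.

def iban_check_digits_valid(iban: str) -> bool:
    # Strip the spaces of the print format and normalise to upper case.
    s = iban.replace(" ", "").upper()
    # Move the country code and check digits (the first four characters) to the end.
    rearranged = s[4:] + s[:4]
    # Replace each letter with two digits: A=10, B=11, ..., Z=35.
    numeric = "".join(str(int(c, 36)) for c in rearranged)
    # The check digit test passes when the remainder modulo 97 is 1.
    # Python integers have arbitrary precision, so the long value is not a problem.
    return int(numeric) % 97 == 1

print(iban_check_digits_valid("GB82 WEST 1234 5698 7654 32"))  # True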
According to the ECBS "generation of the IBAN shall be the exclusive responsibility of the bank/branch servicing the account". The ECBS document replicates part of the ISO/IEC 7064:2003 standard as a method for generating check digits in the range 02 to 98. Check digits in the ranges 00 to 96, 01 to 97, and 03 to 99 will also provide validation of an IBAN, but the standard is silent as to whether or not these ranges may be used.
The preferred algorithm is:
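A minimal Python sketch of such a generation step is shown below; it assumes the widely used approach of appending the country code and placeholder check digits 00 to the BBAN, converting letters to digits, and subtracting the mod-97 remainder from 98, which yields check digits in the range 02 to 98. It is illustrative only.

def generate_check_digits(country_code: str, bban: str) -> str:
    # Append the country code and the placeholder check digits "00" to the BBAN.
    rearranged = bban.upper() + country_code.upper() + "00"
    # Replace each letter with two digits: A=10, B=11, ..., Z=35.
    numeric = "".join(str(int(c, 36)) for c in rearranged)
    # Subtract the mod-97 remainder from 98 and zero-pad to two characters.
    return "{:02d}".format(98 - int(numeric) % 97)

# The fictitious UK BBAN used in the validation example yields check digits 82:
print(generate_check_digits("GB", "WEST12345698765432"))  # 82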
Any computer programming language or software package that is used to compute D mod 97 directly must have the ability to handle integers of more than 30 digits. In practice, this can only be done by software that either supports arbitrary-precision arithmetic or that can handle 219-bit (unsigned) integers, features that are often not standard. If the application software in use does not provide the ability to handle integers of this size, the modulo operation can be performed in a piece-wise manner (as is the case with the UN CEFACT TBG5 JavaScript program).
Piece-wise calculation D mod 97 can be done in many ways. One such way is as follows:
The result of the final calculation in step 2 will be D mod 97 = N mod 97.
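As the individual steps are defined in the standard rather than reproduced here, the Python sketch below shows one common piece-wise scheme: the first nine digits are reduced modulo 97, and the running remainder (at most two digits) is then repeatedly prepended to the next seven digits, so no intermediate value exceeds nine digits. It is illustrative only.

def mod97_piecewise(number_string: str) -> int:
    # Reduce the first nine digits modulo 97.
    remainder = int(number_string[:9]) % 97
    position = 9
    # Prepend the remainder to the next seven digits and reduce again;
    # every intermediate value fits comfortably in an ordinary integer type.
    while position < len(number_string):
        remainder = int(str(remainder) + number_string[position:position + 7]) % 97
        position += 7
    return remainder

# Applied to the D value of the worked example that follows:
print(mod97_piecewise("3214282912345698765432161182"))  # 1, so the check digit test passes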
In this example, the above algorithm for D mod 97 will be applied to D = 3214282912345698765432161182. (The digits are colour-coded to aid the description below.) If the result is one, the IBAN corresponding to D passes the check digit test.
From step 8, the final result is D mod 97 = 1 and the IBAN has passed this check digit test.
In addition to the IBAN check digits, many countries have their own national check digits used within the BBAN, as part of their national account number formats. Each country determines its own algorithm used for assigning and validating the national check digits - some relying on international standards, some inventing their own national standard, and some allowing each bank to decide if or how to implement them. Some algorithms apply to the entire BBAN, and others to one or more of the fields within it. The check digits may be considered an integral part of the account number, or an external field separate from the account number, depending on the country's rules.
Most of the variations used are based on two categories of algorithms:
- ISO 7064 MOD-97-10: Treat the account number as a large integer, divide it by 97 and use the remainder or its complement as the check digit(s).
- Weighted sum: Treat the account number as a series of individual numbers, multiply each number by a weight value according to its position in the string, sum the products, divide the sum by a modulus (usually 10 or 11) and use the remainder or its complement as the check digit.
In both cases, there may first be a translation from alphanumeric characters to numbers using conversion tables. The complement, if used, means the remainder r is subtracted from a fixed value, usually the modulus or the modulus plus one (with the common exception that a remainder of 0 results in 0, denoted as 0 → 0, as opposed to e.g. 0 → 97, meaning that if the remainder is zero the checksum is 97). Note that some national specifications define the weights order from right to left, but since the BBAN length in the IBAN is fixed, they can be used from left to right as well.
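Because each country defines its own scheme, the following Python sketch is only a generic illustration of the weighted-sum idea, using hypothetical weights, modulus 11 and the 0 → 0 complement rule described above; it does not correspond to any particular country's algorithm.

def weighted_sum_check_digit(account_digits, weights, modulus=11):
    # Multiply each digit by its positional weight and sum the products.
    total = sum(int(d) * w for d, w in zip(account_digits, weights))
    remainder = total % modulus
    # Complement rule: subtract the remainder from the modulus, mapping 0 to 0.
    return 0 if remainder == 0 else modulus - remainder

# Hypothetical eight-digit account number and weights, for illustration only:
print(weighted_sum_check_digit("12345678", [7, 6, 5, 4, 3, 2, 7, 6]))  # 2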
International bank transactions use either an IBAN or the ISO 9362 Business Identifier Code system (BIC or SWIFT code) in conjunction with the BBAN (Basic Bank Account Number).
The banks of most countries in Europe publish account numbers using both the IBAN format and the nationally recognised identifiers, this being mandatory within the European Economic Area.
Day-to-day administration of banking in British Overseas Territories varies from territory to territory; some, such as South Georgia and the South Sandwich Islands, have too small a population to warrant a banking system, while others, such as Bermuda, have a thriving financial sector. The use of the IBAN is up to the local government: Gibraltar, formerly part of the European Union, is required to use the IBAN, as are the Crown Dependencies, which use the British clearing system; the British Virgin Islands have chosen to do so. As of April 2013, no other British Overseas Territories have chosen to use the IBAN. Banks in the Caribbean Netherlands also do not use the IBAN.
The IBAN designation scheme was chosen as the foundation for electronic straight-through processing in the European Economic Area. The European Parliament mandated that bank charges be the same for domestic credit transfers as for cross-border credit transfers, as regulated in Regulation (EC) No 2560/2001 (updated by Regulation (EC) No 924/2009). This regulation took effect in 2003. Only payments in euro up to €12,500 to a bank account designated by its IBAN were covered by the regulation, not payments in other currencies.
The Euro Payments regulation was the foundation for the decision to create a Single Euro Payments Area (SEPA). The European Central Bank has created the TARGET2 interbank network that unifies the technical infrastructure of the 26 central banks of the European Union (although Sweden has opted out). SEPA is a self-regulatory initiative by the banking sector of Europe as represented in the European Payments Council (EPC). The European Union made the scheme mandatory through the Payment Services Directive published in 2007. Since January 2008, all countries were required to support SEPA credit transfer, and SEPA direct debit was required to be supported since November 2009. The regulation on SEPA payments increased the charge cap (same price for domestic payments as for cross-border payments) to €50,000.
With a further decision of the European Parliament, the IBAN scheme for bank accounts fully replaced the domestic numbering schemes from 31 December 2012. On 16 December 2010, the European Commission published regulations that made IBAN support mandatory for domestic credit transfer by 2013 and for domestic direct debit by 2014 (with a 12 and 24 months transition period respectively). Some countries had already replaced their traditional bank account scheme by IBAN. This included Switzerland where IBAN was introduced for national credit transfer on 1 January 2006 and the support for the old bank account numbers was not required from 1 January 2010.
Based on a 20 December 2011 memorandum, the EU parliament resolved the mandatory dates for the adoption of the IBAN on 14 February 2012. On 1 February 2014, all national systems for credit transfer and direct debit were abolished and replaced by an IBAN-based system. This was then extended to all cross-border SEPA transactions on 1 February 2016 (Article 5 Section 7). After these dates the IBAN is sufficient to identify an account for home and foreign financial transactions in SEPA countries and banks are no longer permitted to require that the customer supply the BIC for the beneficiary's bank.
In the run-up to the 1 February 2014 deadline, it became apparent that many old bank account numbers had not been allocated IBANs—an issue that was addressed on a country-by-country basis. In Germany, for example, Deutsche Bundesbank and the German Banking Industry Committee required that all holders of German bank codes ("Bankleitzahl") publish the specifics of their IBAN generation format, taking into account not only the generation of check digits but also the handling of legacy bank codes, thereby enabling third parties to generate IBANs independently of the bank. The first such catalogue was published in June 2013 as a variant of the old bank code catalog ("Bankleitzahlendatei").
Banks in numerous non-European countries including most states of the Middle East, North Africa and the Caribbean have implemented the IBAN format for account identification. In some countries the IBAN is used on an ad hoc basis, an example was Ukraine where account numbers used for international transfers by some domestic banks had additional aliases that followed the IBAN format as a precursor to formal SWIFT registration. This practice in Ukraine ended on 1 November 2019 when all Ukrainian banks had fully switched to the IBAN standard.
The degree to which a bank verifies the validity of a recipient's bank account number depends on the configuration of the transmitting bank's software—many major software packages supply bank account validation as a standard function. Some banks outside Europe may not recognize IBAN, though this is expected to diminish with time. Non-European banks usually accept IBANs for accounts in Europe, although they might not treat IBANs differently from other foreign bank account numbers. In particular, they might not check the IBAN's validity prior to sending the transfer.
Banks in the United States do not use IBAN as account numbers for U.S. accounts and use ABA routing transit numbers. Any adoption of the IBAN standard by U.S. banks would likely be initiated by ANSI ASC X9, the U.S. financial services standards development organization: a working group (X9B20) was established as an X9 subcommittee to generate an IBAN construction for U.S. bank accounts.
Canadian financial institutions have not adopted IBAN and use routing numbers issued by Payments Canada for domestic transfers, and SWIFT for international transfers. There is no formal governmental or private sector regulatory requirement in Canada for the major banks to use IBAN.
Australia and New Zealand do not use IBAN. They use Bank State Branch codes for domestic transfers and SWIFT for international transfers.
This table summarises the IBAN formats by country:
In addition to the above, the IBAN is under development in countries below but has not yet been catalogued for general international use.
In this list | [
{
"paragraph_id": 0,
"text": "The International Bank Account Number (IBAN) is an internationally agreed upon system of identifying bank accounts across national borders to facilitate the communication and processing of cross border transactions with a reduced risk of transcription errors. An IBAN uniquely identifies the account of a customer at a financial institution. It was originally adopted by the European Committee for Banking Standards (ECBS) and since 1997 as the international standard ISO 13616 under the International Organization for Standardization (ISO). The current version is ISO 13616:2020, which indicates the Society for Worldwide Interbank Financial Telecommunication (SWIFT) as the formal registrar. Initially developed to facilitate payments within the European Union, it has been implemented by most European countries and numerous countries in other parts of the world, mainly in the Middle East and the Caribbean. As of July 2023, 86 countries were using the IBAN numbering system.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The IBAN consists of up to 34 alphanumeric characters comprising a country code; two check digits; and a number that includes the domestic bank account number, branch identifier, and potential routing information. The check digits enable a check of the bank account number to confirm its integrity before submitting a transaction.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Before IBAN, differing national standards for bank account identification (i.e. bank, branch, routing codes, and account number) were confusing for some users. This often led to necessary routing information being missing from payments. Routing information as specified by ISO 9362 (also known as Business Identifier Codes (BIC), SWIFT ID or SWIFT code, and SWIFT-BIC) does not require a specific format for the transaction so the identification of accounts and transaction types is left to agreements of the transaction partners. It also does not contain check digits, so errors of transcription were not detectable and it was not possible for a sending bank to validate the routing information prior to submitting the payment. Routing errors caused delayed payments and incurred extra costs to the sending and receiving banks and often to intermediate routing banks.",
"title": "Background"
},
{
"paragraph_id": 3,
"text": "In 1997, to overcome these difficulties, the International Organization for Standardization (ISO) published ISO 13616:1997. This proposal had a degree of flexibility that the European Committee for Banking Standards (ECBS) believed would make it unworkable, and they produced a \"slimmed down\" version of the standard which, amongst other things, permitted only upper-case letters and required that the IBAN for each country have a fixed length. ISO 13616:1997 was subsequently withdrawn and replaced by ISO 13616:2003. The standard was revised again in 2007 when it was split into two parts. ISO 13616-1:2007 \"specifies the elements of an international bank account number (IBAN) used to facilitate the processing of data internationally in data interchange, in financial environments as well as within and between other industries\" but \"does not specify internal procedures, file organization techniques, storage media, languages, etc. to be used in its implementation\". ISO 13616-2:2007 describes \"the Registration Authority (RA) responsible for the registry of IBAN formats that are compliant with ISO 13616-1 [and] the procedures for registering ISO 13616-compliant IBAN formats\". The official IBAN registrar under ISO 13616-2:2007 is SWIFT.",
"title": "Background"
},
{
"paragraph_id": 4,
"text": "IBAN imposes a flexible but regular format sufficient for account identification and contains validation information to avoid errors of transcription. It carries all the routing information needed to get a payment from one bank to another wherever it may be; it contains key bank account details such as country code, branch codes (known as sort codes in the UK and Ireland) and account numbers, and it contains check digits which can be validated at source according to a single standard procedure. Where used, IBANs have reduced trans-national money transfer errors to under 0.1% of total payments",
"title": "Background"
},
{
"paragraph_id": 5,
"text": "The IBAN consists of up to 34 alphanumeric characters, as follows:",
"title": "Structure"
},
{
"paragraph_id": 6,
"text": "The check digits represent the checksum of the bank account number which is used by banking systems to confirm that the number contains no simple errors.",
"title": "Structure"
},
{
"paragraph_id": 7,
"text": "In order to facilitate reading by humans, IBANs are traditionally expressed in groups of four characters separated by spaces, the last group being of variable length as shown in the example below; when transmitted electronically however spaces are omitted.",
"title": "Structure"
},
{
"paragraph_id": 8,
"text": "Permitted IBAN characters are the digits 0 to 9 and the 26 Latin alphabetic characters A to Z. This applies even in countries where these characters are not used in the national language (e.g. Greece).",
"title": "Structure"
},
{
"paragraph_id": 9,
"text": "The Basic Bank Account Number (BBAN) format is decided by the national central bank or designated payment authority of each country. There is no consistency between the formats adopted. The national authority may register its BBAN format with SWIFT but is not obliged to do so. It may adopt IBAN without registration. SWIFT also acts as the registration authority for the SWIFT system, which is used by most countries that have not adopted IBAN. A major difference between the two systems is that under SWIFT there is no requirement that BBANs used within a country be of a pre-defined length.",
"title": "Structure"
},
{
"paragraph_id": 10,
"text": "The BBAN must be of a fixed length for the country and comprise case-insensitive alphanumeric characters. It includes the domestic bank account number, branch identifier, and potential routing information. Each country can have a different national routing/account numbering system, up to a maximum of 30 alphanumeric characters.",
"title": "Structure"
},
{
"paragraph_id": 11,
"text": "The check digits enable the sending bank (or its customer) to perform a sanity check of the routing destination and account number from a single string of data at the time of data entry. This check is guaranteed to detect any instances where a single character has been omitted, duplicated, mistyped or where two characters have been transposed. Thus routing and account number errors are virtually eliminated.",
"title": "Structure"
},
{
"paragraph_id": 12,
"text": "One of the design aims of the IBAN was to enable as much validation as possible to be done at the point of data entry. In particular, the computer program that accepts an IBAN will be able to validate:",
"title": "Processing"
},
{
"paragraph_id": 13,
"text": "The check digits are calculated using MOD-97-10 as per ISO/IEC 7064:2003 (abbreviated to mod-97 in this article), which specifies a set of check character systems capable of protecting strings against errors which occur when people copy or key data. In particular, the standard states that the following can be detected:",
"title": "Processing"
},
{
"paragraph_id": 14,
"text": "The underlying rules for IBANs is that the account-servicing financial institution should issue an IBAN, as there are a number of areas where different IBANs could be generated from the same account and branch numbers that would satisfy the generic IBAN validation rules. In particular cases where 00 is a valid check digit, 97 will not be a valid check digit, likewise, if 01 is a valid check digit, 98 will not be a valid check digit, similarly with 02 and 99.",
"title": "Processing"
},
{
"paragraph_id": 15,
"text": "The UN CEFACT TBG5 has published a free IBAN validation service in 32 languages for all 57 countries that have adopted the IBAN standard. They have also published the Javascript source code of the verification algorithm.",
"title": "Processing"
},
{
"paragraph_id": 16,
"text": "An English language IBAN checker for ECBS member country bank accounts is available on its website.",
"title": "Processing"
},
{
"paragraph_id": 17,
"text": "",
"title": "Processing"
},
{
"paragraph_id": 18,
"text": "An IBAN is validated by converting it into an integer and performing a basic mod-97 operation (as described in ISO 7064) on it. If the IBAN is valid, the remainder equals 1. The algorithm of IBAN validation is as follows:",
"title": "Processing"
},
{
"paragraph_id": 19,
"text": "If the remainder is 1, the check digit test is passed and the IBAN might be valid.",
"title": "Processing"
},
{
"paragraph_id": 20,
"text": "Example (fictitious United Kingdom bank, sort code 12-34-56, account number 98765432):",
"title": "Processing"
},
{
"paragraph_id": 21,
"text": "According to the ECBS \"generation of the IBAN shall be the exclusive responsibility of the bank/branch servicing the account\". The ECBS document replicates part of the ISO/IEC 7064:2003 standard as a method for generating check digits in the range 02 to 98. Check digits in the ranges 00 to 96, 01 to 97, and 03 to 99 will also provide validation of an IBAN, but the standard is silent as to whether or not these ranges may be used.",
"title": "Processing"
},
{
"paragraph_id": 22,
"text": "The preferred algorithm is:",
"title": "Processing"
},
{
"paragraph_id": 23,
"text": "Any computer programming language or software package that is used to compute D mod 97 directly must have the ability to handle integers of more than 30 digits. In practice, this can only be done by software that either supports arbitrary-precision arithmetic or that can handle 219-bit (unsigned) integers, features that are often not standard. If the application software in use does not provide the ability to handle integers of this size, the modulo operation can be performed in a piece-wise manner (as is the case with the UN CEFACT TBG5 JavaScript program).",
"title": "Processing"
},
{
"paragraph_id": 24,
"text": "Piece-wise calculation D mod 97 can be done in many ways. One such way is as follows:",
"title": "Processing"
},
{
"paragraph_id": 25,
"text": "The result of the final calculation in step 2 will be D mod 97 = N mod 97.",
"title": "Processing"
},
{
"paragraph_id": 26,
"text": "In this example, the above algorithm for D mod 97 will be applied to D = 3214282912345698765432161182. (The digits are colour-coded to aid the description below.) If the result is one, the IBAN corresponding to D passes the check digit test.",
"title": "Processing"
},
{
"paragraph_id": 27,
"text": "From step 8, the final result is D mod 97 = 1 and the IBAN has passed this check digit test.",
"title": "Processing"
},
{
"paragraph_id": 28,
"text": "In addition to the IBAN check digits, many countries have their own national check digits used within the BBAN, as part of their national account number formats. Each country determines its own algorithm used for assigning and validating the national check digits - some relying on international standards, some inventing their own national standard, and some allowing each bank to decide if or how to implement them. Some algorithms apply to the entire BBAN, and others to one or more of the fields within it. The check digits may be considered an integral part of the account number, or an external field separate from the account number, depending on the country's rules.",
"title": "Processing"
},
{
"paragraph_id": 29,
"text": "Most of the variations used are based on two categories of algorithms:",
"title": "Processing"
},
{
"paragraph_id": 30,
"text": "- ISO 7064 MOD-97-10: Treat the account number as a large integer, divide it by 97 and use the remainder or its complement as the check digit(s).",
"title": "Processing"
},
{
"paragraph_id": 31,
"text": "- Weighted sum: Treat the account number as a series of individual numbers, multiply each number by a weight value according to its position in the string, sum the products, divide the sum by a modulus (usually 10 or 11) and use the remainder or its complement as the check digit.",
"title": "Processing"
},
{
"paragraph_id": 32,
"text": "In both cases, there may first be a translation from alphanumeric characters to numbers using conversion tables. The complement, if used, means the remainder r is subtracted from a fixed value, usually the modulus or the modulus plus one (with the common exception that a remainder of 0 results in 0, denoted as 0 → 0,as opposed to e.g. 0 → 97 meaning that if the reminder is zero the checksum is 97). Note that some national specifications define the weights order from right to left, but since the BBAN length in the IBAN is fixed, they can be used from left to right as well.",
"title": "Processing"
},
{
"paragraph_id": 33,
"text": "International bank transactions use either an IBAN or the ISO 9362 Business Identifier Code system (BIC or SWIFT code) in conjunction with the BBAN (Basic Bank Account Number).",
"title": "Adoption"
},
{
"paragraph_id": 34,
"text": "The banks of most countries in Europe publish account numbers using both the IBAN format and the nationally recognised identifiers, this being mandatory within the European Economic Area.",
"title": "Adoption"
},
{
"paragraph_id": 35,
"text": "Day-to-day administration of banking in British Overseas Territories varies from territory to territory; some, such as South Georgia and the South Sandwich Islands, have too small a population to warrant a banking system while others, such as Bermuda, have a thriving financial sector. The use of the IBAN is up to the local government—Gibraltar, formerly part of the European Union is required to use the IBAN, as are the Crown Dependencies, which use the British clearing system, and the British Virgin Islands have chosen to do so. As of April 2013, no other British Overseas Territories have chosen to use the IBAN. Banks in the Caribbean Netherlands also do not use the IBAN.",
"title": "Adoption"
},
{
"paragraph_id": 36,
"text": "The IBAN designation scheme was chosen as the foundation for electronic straight-through processing in the European Economic Area. The European Parliament mandated that a bank charge needs to be the same amount for domestic credit transfers as for cross-border credit transfers regulated in decision 2560/2001 (updated in 924/2009). This regulation took effect in 2003. Only payments in euro up to €12,500 to a bank account designated by its IBAN were covered by the regulation, not payments in other currencies.",
"title": "Adoption"
},
{
"paragraph_id": 37,
"text": "The Euro Payments regulation was the foundation for the decision to create a Single Euro Payments Area (SEPA). The European Central Bank has created the TARGET2 interbank network that unifies the technical infrastructure of the 26 central banks of the European Union (although Sweden has opted out). SEPA is a self-regulatory initiative by the banking sector of Europe as represented in the European Payments Council (EPC). The European Union made the scheme mandatory through the Payment Services Directive published in 2007. Since January 2008, all countries were required to support SEPA credit transfer, and SEPA direct debit was required to be supported since November 2009. The regulation on SEPA payments increased the charge cap (same price for domestic payments as for cross-border payments) to €50,000.",
"title": "Adoption"
},
{
"paragraph_id": 38,
"text": "With a further decision of the European Parliament, the IBAN scheme for bank accounts fully replaced the domestic numbering schemes from 31 December 2012. On 16 December 2010, the European Commission published regulations that made IBAN support mandatory for domestic credit transfer by 2013 and for domestic direct debit by 2014 (with a 12 and 24 months transition period respectively). Some countries had already replaced their traditional bank account scheme by IBAN. This included Switzerland where IBAN was introduced for national credit transfer on 1 January 2006 and the support for the old bank account numbers was not required from 1 January 2010.",
"title": "Adoption"
},
{
"paragraph_id": 39,
"text": "Based on a 20 December 2011 memorandum, the EU parliament resolved the mandatory dates for the adoption of the IBAN on 14 February 2012. On 1 February 2014, all national systems for credit transfer and direct debit were abolished and replaced by an IBAN-based system. This was then extended to all cross-border SEPA transactions on 1 February 2016 (Article 5 Section 7). After these dates the IBAN is sufficient to identify an account for home and foreign financial transactions in SEPA countries and banks are no longer permitted to require that the customer supply the BIC for the beneficiary's bank.",
"title": "Adoption"
},
{
"paragraph_id": 40,
"text": "In the run-up to the 1 February 2014 deadline, it became apparent that many old bank account numbers had not been allocated IBANs—an issue that was addressed on a country-by-country basis. In Germany, for example, Deutsche Bundesbank and the German Banking Industry Committee required that all holders of German bank codes (\"Bankleitzahl\") published the specifics of their IBAN generation format taking into account not only the generation of check digits but also the handling of legacy bank codes, thereby enabling third parties to generate IBANs independently of the bank. The first such catalogue was published in June 2013 as a variant of the old bank code catalog (\"Bankleitzahlendatei\").",
"title": "Adoption"
},
{
"paragraph_id": 41,
"text": "Banks in numerous non-European countries including most states of the Middle East, North Africa and the Caribbean have implemented the IBAN format for account identification. In some countries the IBAN is used on an ad hoc basis, an example was Ukraine where account numbers used for international transfers by some domestic banks had additional aliases that followed the IBAN format as a precursor to formal SWIFT registration. This practice in Ukraine ended on 1 November 2019 when all Ukrainian banks had fully switched to the IBAN standard.",
"title": "Adoption"
},
{
"paragraph_id": 42,
"text": "The degree to which a bank verifies the validity of a recipient's bank account number depends on the configuration of the transmitting bank's software—many major software packages supply bank account validation as a standard function. Some banks outside Europe may not recognize IBAN, though this is expected to diminish with time. Non-European banks usually accept IBANs for accounts in Europe, although they might not treat IBANs differently from other foreign bank account numbers. In particular, they might not check the IBAN's validity prior to sending the transfer.",
"title": "Adoption"
},
{
"paragraph_id": 43,
"text": "Banks in the United States do not use IBAN as account numbers for U.S. accounts and use ABA routing transit numbers. Any adoption of the IBAN standard by U.S. banks would likely be initiated by ANSI ASC X9, the U.S. financial services standards development organization: a working group (X9B20) was established as an X9 subcommittee to generate an IBAN construction for U.S. bank accounts.",
"title": "Adoption"
},
{
"paragraph_id": 44,
"text": "Canadian financial institutions have not adopted IBAN and use routing numbers issued by Payments Canada for domestic transfers, and SWIFT for international transfers. There is no formal governmental or private sector regulatory requirement in Canada for the major banks to use IBAN.",
"title": "Adoption"
},
{
"paragraph_id": 45,
"text": "Australia and New Zealand do not use IBAN. They use Bank State Branch codes for domestic transfers and SWIFT for international transfers.",
"title": "Adoption"
},
{
"paragraph_id": 46,
"text": "This table summarises the IBAN formats by country:",
"title": "Adoption"
},
{
"paragraph_id": 47,
"text": "In addition to the above, the IBAN is under development in countries below but has not yet been catalogued for general international use.",
"title": "Adoption"
},
{
"paragraph_id": 48,
"text": "In this list",
"title": "Adoption"
}
]
| The International Bank Account Number (IBAN) is an internationally agreed upon system of identifying bank accounts across national borders to facilitate the communication and processing of cross border transactions with a reduced risk of transcription errors. An IBAN uniquely identifies the account of a customer at a financial institution. It was originally adopted by the European Committee for Banking Standards (ECBS) and since 1997 as the international standard ISO 13616 under the International Organization for Standardization (ISO). The current version is ISO 13616:2020, which indicates the Society for Worldwide Interbank Financial Telecommunication (SWIFT) as the formal registrar. Initially developed to facilitate payments within the European Union, it has been implemented by most European countries and numerous countries in other parts of the world, mainly in the Middle East and the Caribbean. As of July 2023, 86 countries were using the IBAN numbering system. The IBAN consists of up to 34 alphanumeric characters comprising a country code; two check digits; and a number that includes the domestic bank account number, branch identifier, and potential routing information. The check digits enable a check of the bank account number to confirm its integrity before submitting a transaction. | 2001-11-18T02:23:36Z | 2023-11-25T15:57:05Z | [
"Template:Reflist",
"Template:Cite press release",
"Template:Audiovisual works",
"Template:Short description",
"Template:Update inline",
"Template:Ill",
"Template:Portal",
"Template:Color",
"Template:As of",
"Template:Main article",
"Template:Cite web",
"Template:Commons category",
"Template:ISO standards",
"Template:Bank codes and identification",
"Template:Var",
"Template:Legend",
"Template:Cite book",
"Template:Anchor",
"Template:Nowrap"
]
| https://en.wikipedia.org/wiki/International_Bank_Account_Number |
15,254 | Infinitive | Infinitive (abbreviated INF) is a linguistics term for certain verb forms existing in many languages, most often used as non-finite verbs. As with many linguistic concepts, there is not a single definition applicable to all languages. The name is derived from Late Latin [modus] infinitivus, a derivative of infinitus meaning "unlimited".
In traditional descriptions of English, the infinitive is the basic dictionary form of a verb when used non-finitely, with or without the particle to. Thus to go is an infinitive, as is go in a sentence like "I must go there" (but not in "I go there", where it is a finite verb). The form without to is called the bare infinitive, and the form with to is called the full infinitive or to-infinitive.
In many other languages the infinitive is a distinct single word, often with a characteristic inflective ending, like cantar ("[to] sing") in Portuguese, morir ("[to] die") in Spanish, manger ("[to] eat") in French, portare ("[to] carry") in Latin and Italian, lieben ("[to] love") in German, читать (chitat', "[to] read") in Russian, etc. However, some languages have no infinitive forms. Many Native American languages, Arabic, Asian languages such as Japanese, and some languages in Africa and Australia do not have direct equivalents to infinitives or verbal nouns. Instead, they use finite verb forms in ordinary clauses or various special constructions.
Being a verb, an infinitive may take objects and other complements and modifiers to form a verb phrase (called an infinitive phrase). Like other non-finite verb forms (like participles, converbs, gerunds and gerundives), infinitives do not generally have an expressed subject; thus an infinitive verb phrase also constitutes a complete non-finite clause, called an infinitive (infinitival) clause. Such phrases or clauses may play a variety of roles within sentences, often being nouns (for example being the subject of a sentence or being a complement of another verb), and sometimes being adverbs or other types of modifier. Many verb forms known as infinitives differ from gerunds (verbal nouns) in that they do not inflect for case or occur in adpositional phrases. Instead, infinitives often originate in earlier inflectional forms of verbal nouns. Unlike finite verbs, infinitives are not usually inflected for tense, person, etc. either, although some degree of inflection sometimes occurs; for example Latin has distinct active and passive infinitives.
An infinitive phrase is a verb phrase constructed with the verb in infinitive form. This consists of the verb together with its objects and other complements and modifiers. Some examples of infinitive phrases in English are given below – these may be based on either the full infinitive (introduced by the particle to) or the bare infinitive (without the particle to).
Infinitive phrases often have an implied grammatical subject, making them effectively clauses rather than phrases. Such infinitive clauses, or infinitival clauses, are one of several kinds of non-finite clause. They can play various grammatical roles as constituents of a larger clause or sentence; for example, one may form a noun phrase or act as an adverb. Infinitival clauses may be embedded within each other in complex ways, like in the sentence:
Here the infinitival clause to get married is contained within the finite dependent clause that John Welborn is going to get married to Blair; this in turn is contained within another infinitival clause, which is contained in the finite independent clause (the whole sentence).
The grammatical structure of an infinitival clause may differ from that of a corresponding finite clause. For example, in German, the infinitive form of the verb usually goes to the end of its clause, whereas a finite verb (in an independent clause) typically comes in second position.
Following certain verbs or prepositions, infinitives commonly do have an implicit subject, e.g.,
As these examples illustrate, the implicit subject of the infinitive occurs in the objective case (them, him) in contrast to the nominative case that occurs with a finite verb, e.g., "They ate their dinner." Such accusative and infinitive constructions are present in Latin and Ancient Greek, as well as many modern languages. The atypical case regarding the implicit subject of an infinitive is an example of exceptional case-marking. As shown in the above examples, the object of the transitive verb "want" and the preposition "for" allude to their respective pronouns' subjective role within the clauses.
In some languages, infinitives may be marked for grammatical categories like voice, aspect, and to some extent tense. This may be done by inflection, as with the Latin perfect and passive infinitives, or by periphrasis (with the use of auxiliary verbs), as with the Latin future infinitives or the English perfect and progressive infinitives.
Latin has present, perfect and future infinitives, with active and passive forms of each. For details see Latin conjugation § Infinitives.
English has infinitive constructions that are marked (periphrastically) for aspect: perfect, progressive (continuous), or a combination of the two (perfect progressive). These can also be marked for passive voice (as can the plain infinitive):
Further constructions can be made with other auxiliary-like expressions, like (to) be going to eat or (to) be about to eat, which have future meaning. For more examples of the above types of construction, see Uses of English verb forms § Perfect and progressive non-finite constructions.
Perfect infinitives are also found in other European languages that have perfect forms with auxiliaries similarly to English. For example, avoir mangé means "(to) have eaten" in French.
Regarding English, the term "infinitive" is traditionally applied to the unmarked form of the verb (the "plain form") when it forms a non-finite verb, whether or not introduced by the particle to. Hence sit and to sit, as used in the following sentences, would each be considered an infinitive:
The form without to is called the bare infinitive; the form introduced by to is called the full infinitive or to-infinitive.
The other non-finite verb forms in English are the gerund or present participle (the -ing form), and the past participle – these are not considered infinitives. Moreover, the unmarked form of the verb is not considered an infinitive when it forms a finite verb: like a present indicative ("I sit every day"), subjunctive ("I suggest that he sit"), or imperative ("Sit down!"). (For some irregular verbs the form of the infinitive coincides additionally with that of the past tense and/or past participle, like in the case of put.)
Certain auxiliary verbs are defective in that they do not have infinitives (or any other non-finite forms). This applies to the modal verbs (can, must, etc.), as well as certain related auxiliaries like the had of had better and the used of used to. (Periphrases can be employed instead in some cases, like (to) be able to for can, and (to) have to for must.) It also applies to the auxiliary do, as used in questions, negatives and emphasis as described under do-support. (Infinitives are negated by simply preceding them with not. Of course the verb do when forming a main verb can appear in the infinitive.) However, the auxiliary verbs have (used to form the perfect) and be (used to form the passive voice and continuous aspect) both commonly appear in the infinitive: "I should have finished by now"; "It's thought to have been a burial site"; "Let him be released"; "I hope to be working tomorrow."
Huddleston and Pullum's Cambridge Grammar of the English Language (2002) does not use the notion of the "infinitive" ("there is no form in the English verb paradigm called 'the infinitive'"), only that of the infinitival clause, noting that English uses the same form of the verb, the plain form, in infinitival clauses that it uses in imperative and present-subjunctive clauses.
A matter of controversy among prescriptive grammarians and style writers has been the appropriateness of separating the two words of the to-infinitive (as in "I expect to happily sit here"). For details of this, see split infinitive. Opposing linguistic theories typically do not consider the to-infinitive a distinct constituent, instead regarding the scope of the particle to as an entire verb phrase; thus, to buy a car is parsed like to [buy [a car]], not like [to buy] [a car].
The bare infinitive and the to-infinitive have a variety of uses in English. The two forms are mostly in complementary distribution – certain contexts call for one, and certain contexts for the other; they are not normally interchangeable, except in occasional instances like after the verb help, where either can be used.
The main uses of infinitives (or infinitive phrases) are as follows:
The infinitive is also the usual dictionary form or citation form of a verb. The form listed in dictionaries is the bare infinitive, although the to-infinitive is often used in referring to verbs or in defining other verbs: "The word 'amble' means 'to walk slowly'"; "How do we conjugate the verb to go?"
For further detail and examples of the uses of infinitives in English, see Bare infinitive and To-infinitive in the article on uses of English verb forms.
The original Proto-Germanic ending of the infinitive was -an, with verbs derived from other words ending in -jan or -janan.
In German it is -en ("sagen"), with -eln or -ern endings on a few words based on -l or -r roots ("segeln", "ändern"). The use of zu with infinitives is similar to English to, but is less frequent than in English. German infinitives can form nouns, often expressing abstractions of the action, in which case they are of neuter gender: das Essen means the eating, but also the food.
In Dutch infinitives also end in -en (zeggen — to say), sometimes used with te similar to English to, e.g., "Het is niet moeilijk te begrijpen" → "It is not hard to understand." The few verbs with stems ending in -a have infinitives in -n (gaan — to go, slaan — to hit). Afrikaans has lost the distinction between the infinitive and present forms of verbs, with the exception of the verbs "wees" (to be), which admits the present form "is", and the verb "hê" (to have), whose present form is "het".
In North Germanic languages the final -n was lost from the infinitive as early as 500–540 AD, reducing the suffix to -a. Later it has been further reduced to -e in Danish and some Norwegian dialects (including the written majority language bokmål). In the majority of Eastern Norwegian dialects and a few bordering Western Swedish dialects the reduction to -e was only partial, leaving some infinitives in -a and others in -e (å laga vs. å kaste). In northern parts of Norway the infinitive suffix is completely lost (å lag’ vs. å kast’) or only the -a is kept (å laga vs. å kast’). The infinitives of these languages are inflected for passive voice through the addition of -s or -st to the active form. This suffix appearance in Old Norse was a contraction of mik (“me”, forming -mk) or sik (reflexive pronoun, forming -sk) and was originally expressing reflexive actions: (hann) kallar (“[he] calls”) + -sik (“himself”) > (hann) kallask (“[he] calls himself”). The suffixes -mk and -sk later merged to -s, which evolved to -st in the western dialects. The loss or reduction of -a in active voice in Norwegian did not occur in the passive forms (-ast, -as), except for some dialects that have -es. The other North Germanic languages have the same vowel in both forms.
The formation of the infinitive in the Romance languages reflects that of their ancestor, Latin, in which almost all verbs had an infinitive ending in -re (preceded by one of various thematic vowels). For example, in Italian infinitives end in -are, -ere, -rre (rare), or -ire (which is still identical to the Latin forms), and in -arsi, -ersi, -rsi, -irsi for the reflexive forms. In Spanish and Portuguese, infinitives end in -ar, -er, or -ir (Spanish also has reflexive forms in -arse, -erse, -irse), while similarly in French they typically end in -re, -er, -oir, and -ir. In Romanian, both short and long-form infinitives exist; the so-called "long infinitives" end in -are, -ere, -ire and in modern speech are used exclusively as verbal nouns, while there are a few verbs that cannot be converted into the nominal long infinitive. The "short infinitives" used in verbal contexts (e.g., after an auxiliary verb) have the endings -a, -ea, -e, and -i (basically removing the ending in "-re"). In Romanian, the infinitive is usually replaced by a clause containing the conjunction să plus the subjunctive mood. The only verb that is modal in common modern Romanian is the verb a putea, "to be able to". However, in popular speech the infinitive after a putea is also increasingly replaced by the subjunctive.
In all Romance languages, infinitives can also form nouns.
Latin infinitives challenged several of the generalizations about infinitives. They did inflect for voice (amare, "to love", amari, to be loved) and for tense (amare, "to love", amavisse, "to have loved"), and allowed for an overt expression of the subject (video Socratem currere, "I see Socrates running"). See Latin conjugation § Infinitives.
Romance languages inherited from Latin the possibility of an overt expression of the subject (as in Italian vedo Socrate correre). Moreover, the "inflected infinitive" (or "personal infinitive") found in Portuguese and Galician inflects for person and number. These, alongside Sardinian, are the only Indo-European languages that allow infinitives to take person and number endings. This helps to make infinitive clauses very common in these languages; for example, the English finite clause in order that you/she/we have... would be translated to Portuguese like para teres/ela ter/termos... (Portuguese is a null-subject language). The Portuguese personal infinitive has no proper tenses, only aspects (imperfect and perfect), but tenses can be expressed using periphrastic structures. For instance, "even though you sing/have sung/are going to sing" could be translated to "apesar de cantares/teres cantado/ires cantar".
Other Romance languages (including Spanish, Romanian, Catalan, and some Italian dialects) allow uninflected infinitives to combine with overt nominative subjects. For example, Spanish al abrir yo los ojos ("when I opened my eyes") or sin yo saberlo ("without my knowing about it").
In Ancient Greek the infinitive has four tenses (present, future, aorist, perfect) and three voices (active, middle, passive). Present and perfect have the same infinitive for both middle and passive, while future and aorist have separate middle and passive forms.
Thematic verbs form present active infinitives by adding to the stem the thematic vowel -ε- and the infinitive ending -εν, and contracts to -ειν, e.g., παιδεύ-ειν. Athematic verbs, and perfect actives and aorist passives, add the suffix -ναι instead, e.g., διδό-ναι. In the middle and passive, the present middle infinitive ending is -σθαι, e.g., δίδο-σθαι and most tenses of thematic verbs add an additional -ε- between the ending and the stem, e.g., παιδεύ-ε-σθαι.
The infinitive per se does not exist in Modern Greek. To see this, consider the ancient Greek ἐθέλω γράφειν “I want to write”. In modern Greek this becomes θέλω να γράψω “I want that I write”. In modern Greek, the infinitive has thus changed form and function and is used mainly in the formation of periphrastic tense forms and not with an article or alone. Instead of the Ancient Greek infinitive system γράφειν, γράψειν, γράψαι, γεγραφέναι, Modern Greek uses only the form γράψει, a development of the ancient Greek aorist infinitive γράψαι. This form is also invariable. The modern Greek infinitive has only two forms according to voice: for example, γράψει for the active voice and γραφ(τ)εί for the passive voice (coming from the ancient passive aorist infinitive γραφῆναι).
The infinitive in Russian usually ends in -t’ (ть) preceded by a thematic vowel, or -ti (ти), if not preceded by one; some verbs have a stem ending in a consonant and change the t to č’, like *mogt’ → moč’ (*могть → мочь) "can". Some other Balto-Slavic languages have the infinitive typically ending in, for example, -ć (sometimes -c) in Polish, -ť in Slovak, -t (formerly -ti) in Czech and Latvian (with a handful ending in -s on the latter), -ty (-ти) in Ukrainian, -ць (-ts') in Belarusian. Lithuanian infinitives end in -ti, Serbo-Croatian in -ti or -ći, and Slovenian in -ti or -či.
Serbian officially retains the infinitives -ti or -ći, but is more flexible than the other Slavic languages in replacing the infinitive with a clause. The infinitive nevertheless remains the dictionary form.
Bulgarian and Macedonian have lost the infinitive altogether except in a handful of frozen expressions, where it is the same as the 3rd person singular aorist form. The expressions in which an infinitive may still be used in Bulgarian are few; nevertheless, in all cases a subordinate clause is the more usual form. For that reason, the present first-person singular conjugation is the dictionary form in Bulgarian, while Macedonian uses the third person singular form of the verb in the present tense.
Hebrew has two infinitives, the infinitive absolute (המקור המוחלט) and the infinitive construct (המקור הנטוי or שם הפועל). The infinitive construct is used after prepositions and is inflected with pronominal endings to indicate its subject or object: בכתוב הסופר bikhtōbh hassōphēr "when the scribe wrote", אחרי לכתו ahare lekhtō "after his going". When the infinitive construct is preceded by ל (lə-, li-, lā-, lo-) "to", it has a similar meaning to the English to-infinitive, and this is its most frequent use in Modern Hebrew. The infinitive absolute is used for verb focus and emphasis, like in מות ימות mōth yāmūth (literally "a dying he will die"; figuratively, "he shall indeed/surely die"). This usage is commonplace in the Hebrew Bible. In Modern Hebrew it is restricted to high-register literary works.
Note, however, that the to-infinitive is not the dictionary form in Hebrew; the dictionary form is the third person singular past form.
The Finnish grammatical tradition includes many non-finite forms that are generally labeled as (numbered) infinitives although many of these are functionally converbs. To form the so-called first infinitive, the strong form of the root (without consonant gradation or epenthetic 'e') is used, and these changes occur:
As such, it is inconvenient for dictionary use, because the imperative would be closer to the root word. Nevertheless, dictionaries use the first infinitive.
There are also four other infinitives, plus a "long" form of the first:
Note that all of these must change to reflect vowel harmony, so the fifth infinitive (with a third-person suffix) of hypätä "jump" is hyppäämäisillään "he was about to jump", not *hyppäämaisillaan.
The Seri language of northwestern Mexico has infinitival forms used in two constructions (with the verb meaning 'want' and with the verb meaning 'be able'). The infinitive is formed by adding a prefix to the stem: either iha- [iʔa-] (plus a vowel change of certain vowel-initial stems) if the complement clause is transitive, or ica- [ika-] (and no vowel change) if the complement clause is intransitive. The infinitive shows agreement in number with the controlling subject. Examples are: icatax ihmiimzo 'I want to go', where icatax is the singular infinitive of the verb 'go' (singular root is -atax), and icalx hamiimcajc 'we want to go', where icalx is the plural infinitive. Examples of the transitive infinitive: ihaho 'to see it/him/her/them' (root -aho), and ihacta 'to look at it/him/her/them' (root -oocta).
In languages without an infinitive, the infinitive is translated either as a that-clause or as a verbal noun. For example, in Literary Arabic the sentence "I want to write a book" is translated as either urīdu an aktuba kitāban (lit. "I want that I write a book", with a verb in the subjunctive mood) or urīdu kitābata kitābin (lit. "I want the writing of a book", with the masdar or verbal noun), and in Levantine Colloquial Arabic biddi aktub kitāb (subordinate clause with verb in subjunctive).
Even in languages that have infinitives, similar constructions are sometimes necessary where English would allow the infinitive. For example, in French the sentence "I want you to come" translates to Je veux que vous veniez (lit. "I want that you come", come being in the subjunctive mood). However, "I want to come" is simply Je veux venir, using the infinitive, just as in English. In Russian, sentences such as "I want you to leave" do not use an infinitive. Rather, they use the conjunction чтобы "in order to/so that" with the past tense form (most probably remnant of subjunctive) of the verb: Я хочу, чтобы вы ушли (literally, "I want so that you left"). | [
{
"paragraph_id": 0,
"text": "Infinitive (abbreviated INF) is a linguistics term for certain verb forms existing in many languages, most often used as non-finite verbs. As with many linguistic concepts, there is not a single definition applicable to all languages. The name is derived from Late Latin [modus] infinitivus, a derivative of infinitus meaning \"unlimited\".",
"title": ""
},
{
"paragraph_id": 1,
"text": "In traditional descriptions of English, the infinitive is the basic dictionary form of a verb when used non-finitely, with or without the particle to. Thus to go is an infinitive, as is go in a sentence like \"I must go there\" (but not in \"I go there\", where it is a finite verb). The form without to is called the bare infinitive, and the form with to is called the full infinitive or to-infinitive.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In many other languages the infinitive is a distinct single word, often with a characteristic inflective ending, like cantar (\"[to] sing\") in Portuguese, morir (\"[to] die\") in Spanish, manger (\"[to] eat\") in French, portare (\"[to] carry\") in Latin and Italian, lieben (\"[to] love\") in German, читать (chitat', \"[to] read\") in Russian, etc. However, some languages have no infinitive forms. Many Native American languages, Arabic, Asian languages such as Japanese, and some languages in Africa and Australia do not have direct equivalents to infinitives or verbal nouns. Instead, they use finite verb forms in ordinary clauses or various special constructions.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Being a verb, an infinitive may take objects and other complements and modifiers to form a verb phrase (called an infinitive phrase). Like other non-finite verb forms (like participles, converbs, gerunds and gerundives), infinitives do not generally have an expressed subject; thus an infinitive verb phrase also constitutes a complete non-finite clause, called an infinitive (infinitival) clause. Such phrases or clauses may play a variety of roles within sentences, often being nouns (for example being the subject of a sentence or being a complement of another verb), and sometimes being adverbs or other types of modifier. Many verb forms known as infinitives differ from gerunds (verbal nouns) in that they do not inflect for case or occur in adpositional phrases. Instead, infinitives often originate in earlier inflectional forms of verbal nouns. Unlike finite verbs, infinitives are not usually inflected for tense, person, etc. either, although some degree of inflection sometimes occurs; for example Latin has distinct active and passive infinitives.",
"title": ""
},
{
"paragraph_id": 4,
"text": "An infinitive phrase is a verb phrase constructed with the verb in infinitive form. This consists of the verb together with its objects and other complements and modifiers. Some examples of infinitive phrases in English are given below – these may be based on either the full infinitive (introduced by the particle to) or the bare infinitive (without the particle to).",
"title": "Phrases and clauses"
},
{
"paragraph_id": 5,
"text": "Infinitive phrases often have an implied grammatical subject making them effectively clauses rather than phrases. Such infinitive clauses or infinitival clauses, are one of several kinds of non-finite clause. They can play various grammatical roles like a constituent of a larger clause or sentence; for example it may form a noun phrase or adverb. Infinitival clauses may be embedded within each other in complex ways, like in the sentence:",
"title": "Phrases and clauses"
},
{
"paragraph_id": 6,
"text": "Here the infinitival clause to get married is contained within the finite dependent clause that John Welborn is going to get married to Blair; this in turn is contained within another infinitival clause, which is contained in the finite independent clause (the whole sentence).",
"title": "Phrases and clauses"
},
{
"paragraph_id": 7,
"text": "The grammatical structure of an infinitival clause may differ from that of a corresponding finite clause. For example, in German, the infinitive form of the verb usually goes to the end of its clause, whereas a finite verb (in an independent clause) typically comes in second position.",
"title": "Phrases and clauses"
},
{
"paragraph_id": 8,
"text": "Following certain verbs or prepositions, infinitives commonly do have an implicit subject, e.g.,",
"title": "Clauses with implicit subject in the objective case"
},
{
"paragraph_id": 9,
"text": "As these examples illustrate, the implicit subject of the infinitive occurs in the objective case (them, him) in contrast to the nominative case that occurs with a finite verb, e.g., \"They ate their dinner.\" Such accusative and infinitive constructions are present in Latin and Ancient Greek, as well as many modern languages. The atypical case regarding the implicit subject of an infinitive is an example of exceptional case-marking. As shown in the above examples, the object of the transitive verb \"want\" and the preposition \"for\" allude to their respective pronouns' subjective role within the clauses.",
"title": "Clauses with implicit subject in the objective case"
},
{
"paragraph_id": 10,
"text": "In some languages, infinitives may be marked for grammatical categories like voice, aspect, and to some extent tense. This may be done by inflection, as with the Latin perfect and passive infinitives, or by periphrasis (with the use of auxiliary verbs), as with the Latin future infinitives or the English perfect and progressive infinitives.",
"title": "Marking for tense, aspect and voice "
},
{
"paragraph_id": 11,
"text": "Latin has present, perfect and future infinitives, with active and passive forms of each. For details see Latin conjugation § Infinitives.",
"title": "Marking for tense, aspect and voice "
},
{
"paragraph_id": 12,
"text": "English has infinitive constructions that are marked (periphrastically) for aspect: perfect, progressive (continuous), or a combination of the two (perfect progressive). These can also be marked for passive voice (as can the plain infinitive):",
"title": "Marking for tense, aspect and voice "
},
{
"paragraph_id": 13,
"text": "Further constructions can be made with other auxiliary-like expressions, like (to) be going to eat or (to) be about to eat, which have future meaning. For more examples of the above types of construction, see Uses of English verb forms § Perfect and progressive non-finite constructions.",
"title": "Marking for tense, aspect and voice "
},
{
"paragraph_id": 14,
"text": "Perfect infinitives are also found in other European languages that have perfect forms with auxiliaries similarly to English. For example, avoir mangé means \"(to) have eaten\" in French.",
"title": "Marking for tense, aspect and voice "
},
{
"paragraph_id": 15,
"text": "Regarding English, the term \"infinitive\" is traditionally applied to the unmarked form of the verb (the \"plain form\") when it forms a non-finite verb, whether or not introduced by the particle to. Hence sit and to sit, as used in the following sentences, would each be considered an infinitive:",
"title": "English"
},
{
"paragraph_id": 16,
"text": "The form without to is called the bare infinitive; the form introduced by to is called the full infinitive or to-infinitive.",
"title": "English"
},
{
"paragraph_id": 17,
"text": "The other non-finite verb forms in English are the gerund or present participle (the -ing form), and the past participle – these are not considered infinitives. Moreover, the unmarked form of the verb is not considered an infinitive when it forms a finite verb: like a present indicative (\"I sit every day\"), subjunctive (\"I suggest that he sit\"), or imperative (\"Sit down!\"). (For some irregular verbs the form of the infinitive coincides additionally with that of the past tense and/or past participle, like in the case of put.)",
"title": "English"
},
{
"paragraph_id": 18,
"text": "Certain auxiliary verbs are defective in that they do not have infinitives (or any other non-finite forms). This applies to the modal verbs (can, must, etc.), as well as certain related auxiliaries like the had of had better and the used of used to. (Periphrases can be employed instead in some cases, like (to) be able to for can, and (to) have to for must.) It also applies to the auxiliary do, as used in questions, negatives and emphasis as described under do-support. (Infinitives are negated by simply preceding them with not. Of course the verb do when forming a main verb can appear in the infinitive.) However, the auxiliary verbs have (used to form the perfect) and be (used to form the passive voice and continuous aspect) both commonly appear in the infinitive: \"I should have finished by now\"; \"It's thought to have been a burial site\"; \"Let him be released\"; \"I hope to be working tomorrow.\"",
"title": "English"
},
{
"paragraph_id": 19,
"text": "Huddleston and Pullum's Cambridge Grammar of the English Language (2002) does not use the notion of the \"infinitive\" (\"there is no form in the English verb paradigm called 'the infinitive'\"), only that of the infinitival clause, noting that English uses the same form of the verb, the plain form, in infinitival clauses that it uses in imperative and present-subjunctive clauses.",
"title": "English"
},
{
"paragraph_id": 20,
"text": "A matter of controversy among prescriptive grammarians and style writers has been the appropriateness of separating the two words of the to-infinitive (as in \"I expect to happily sit here\"). For details of this, see split infinitive. Opposing linguistic theories typically do not consider the to-infinitive a distinct constituent, instead regarding the scope of the particle to as an entire verb phrase; thus, to buy a car is parsed like to [buy [a car]], not like [to buy] [a car].",
"title": "English"
},
{
"paragraph_id": 21,
"text": "The bare infinitive and the to-infinitive have a variety of uses in English. The two forms are mostly in complementary distribution – certain contexts call for one, and certain contexts for the other; they are not normally interchangeable, except in occasional instances like after the verb help, where either can be used.",
"title": "English"
},
{
"paragraph_id": 22,
"text": "The main uses of infinitives (or infinitive phrases) are as follows:",
"title": "English"
},
{
"paragraph_id": 23,
"text": "The infinitive is also the usual dictionary form or citation form of a verb. The form listed in dictionaries is the bare infinitive, although the to-infinitive is often used in referring to verbs or in defining other verbs: \"The word 'amble' means 'to walk slowly'\"; \"How do we conjugate the verb to go?\"",
"title": "English"
},
{
"paragraph_id": 24,
"text": "For further detail and examples of the uses of infinitives in English, see Bare infinitive and To-infinitive in the article on uses of English verb forms.",
"title": "English"
},
{
"paragraph_id": 25,
"text": "The original Proto-Germanic ending of the infinitive was -an, with verbs derived from other words ending in -jan or -janan.",
"title": "Other Germanic languages"
},
{
"paragraph_id": 26,
"text": "In German it is -en (\"sagen\"), with -eln or -ern endings on a few words based on -l or -r roots (\"segeln\", \"ändern\"). The use of zu with infinitives is similar to English to, but is less frequent than in English. German infinitives can form nouns, often expressing abstractions of the action, in which case they are of neuter gender: das Essen means the eating, but also the food.",
"title": "Other Germanic languages"
},
{
"paragraph_id": 27,
"text": "In Dutch infinitives also end in -en (zeggen — to say), sometimes used with te similar to English to, e.g., \"Het is niet moeilijk te begrijpen\" → \"It is not hard to understand.\" The few verbs with stems ending in -a have infinitives in -n (gaan — to go, slaan — to hit). Afrikaans has lost the distinction between the infinitive and present forms of verbs, with the exception of the verbs \"wees\" (to be), which admits the present form \"is\", and the verb \"hê\" (to have), whose present form is \"het\".",
"title": "Other Germanic languages"
},
{
"paragraph_id": 28,
"text": "In North Germanic languages the final -n was lost from the infinitive as early as 500–540 AD, reducing the suffix to -a. Later it has been further reduced to -e in Danish and some Norwegian dialects (including the written majority language bokmål). In the majority of Eastern Norwegian dialects and a few bordering Western Swedish dialects the reduction to -e was only partial, leaving some infinitives in -a and others in -e (å laga vs. å kaste). In northern parts of Norway the infinitive suffix is completely lost (å lag’ vs. å kast’) or only the -a is kept (å laga vs. å kast’). The infinitives of these languages are inflected for passive voice through the addition of -s or -st to the active form. This suffix appearance in Old Norse was a contraction of mik (“me”, forming -mk) or sik (reflexive pronoun, forming -sk) and was originally expressing reflexive actions: (hann) kallar (“[he] calls”) + -sik (“himself”) > (hann) kallask (“[he] calls himself”). The suffixes -mk and -sk later merged to -s, which evolved to -st in the western dialects. The loss or reduction of -a in active voice in Norwegian did not occur in the passive forms (-ast, -as), except for some dialects that have -es. The other North Germanic languages have the same vowel in both forms.",
"title": "Other Germanic languages"
},
{
"paragraph_id": 29,
"text": "The formation of the infinitive in the Romance languages reflects that in their ancestor, Latin, almost all verbs had an infinitive ending with -re (preceded by one of various thematic vowels). For example, in Italian infinitives end in -are, -ere, -rre (rare), or -ire (which is still identical to the Latin forms), and in -arsi, -ersi, -rsi, -irsi for the reflexive forms. In Spanish and Portuguese, infinitives end in -ar, -er, or -ir (Spanish also has reflexive forms in -arse, -erse, -irse), while similarly in French they typically end in -re, -er, oir, and -ir. In Romanian, both short and long-form infinitives exist; the so-called \"long infinitives\" end in -are, -ere, -ire and in modern speech are used exclusively as verbal nouns, while there are a few verbs that cannot be converted into the nominal long infinitive. The \"short infinitives\" used in verbal contexts (e.g., after an auxiliary verb) have the endings -a,-ea, -e, and -i (basically removing the ending in \"-re\"). In Romanian, the infinitive is usually replaced by a clause containing the conjunction să plus the subjunctive mood. The only verb that is modal in common modern Romanian is the verb a putea, to be able to. However, in popular speech the infinitive after a putea is also increasingly replaced by the subjunctive.",
"title": "Latin and Romance languages"
},
{
"paragraph_id": 30,
"text": "In all Romance languages, infinitives can also form nouns.",
"title": "Latin and Romance languages"
},
{
"paragraph_id": 31,
"text": "Latin infinitives challenged several of the generalizations about infinitives. They did inflect for voice (amare, \"to love\", amari, to be loved) and for tense (amare, \"to love\", amavisse, \"to have loved\"), and allowed for an overt expression of the subject (video Socratem currere, \"I see Socrates running\"). See Latin conjugation § Infinitives.",
"title": "Latin and Romance languages"
},
{
"paragraph_id": 32,
"text": "Romance languages inherited from Latin the possibility of an overt expression of the subject (as in Italian vedo Socrate correre). Moreover, the \"inflected infinitive\" (or \"personal infinitive\") found in Portuguese and Galician inflects for person and number. These, alongside Sardinian, are the only Indo-European languages that allow infinitives to take person and number endings. This helps to make infinitive clauses very common in these languages; for example, the English finite clause in order that you/she/we have... would be translated to Portuguese like para teres/ela ter/termos... (Portuguese is a null-subject language). The Portuguese personal infinitive has no proper tenses, only aspects (imperfect and perfect), but tenses can be expressed using periphrastic structures. For instance, \"even though you sing/have sung/are going to sing\" could be translated to \"apesar de cantares/teres cantado/ires cantar\".",
"title": "Latin and Romance languages"
},
{
"paragraph_id": 33,
"text": "Other Romance languages (including Spanish, Romanian, Catalan, and some Italian dialects) allow uninflected infinitives to combine with overt nominative subjects. For example, Spanish al abrir yo los ojos (\"when I opened my eyes\") or sin yo saberlo (\"without my knowing about it\").",
"title": "Latin and Romance languages"
},
{
"paragraph_id": 34,
"text": "In Ancient Greek the infinitive has four tenses (present, future, aorist, perfect) and three voices (active, middle, passive). Present and perfect have the same infinitive for both middle and passive, while future and aorist have separate middle and passive forms.",
"title": "Hellenic languages"
},
{
"paragraph_id": 35,
"text": "Thematic verbs form present active infinitives by adding to the stem the thematic vowel -ε- and the infinitive ending -εν, and contracts to -ειν, e.g., παιδεύ-ειν. Athematic verbs, and perfect actives and aorist passives, add the suffix -ναι instead, e.g., διδό-ναι. In the middle and passive, the present middle infinitive ending is -σθαι, e.g., δίδο-σθαι and most tenses of thematic verbs add an additional -ε- between the ending and the stem, e.g., παιδεύ-ε-σθαι.",
"title": "Hellenic languages"
},
{
"paragraph_id": 36,
"text": "The infinitive per se does not exist in Modern Greek. To see this, consider the ancient Greek ἐθέλω γράφειν “I want to write”. In modern Greek this becomes θέλω να γράψω “I want that I write”. In modern Greek, the infinitive has thus changed form and function and is used mainly in the formation of periphrastic tense forms and not with an article or alone. Instead of the Ancient Greek infinitive system γράφειν, γράψειν, γράψαι, γεγραφέναι, Modern Greek uses only the form γράψει, a development of the ancient Greek aorist infinitive γράψαι. This form is also invariable. The modern Greek infinitive has only two forms according to voice: for example, γράψει for the active voice and γραφ(τ)εί for the passive voice (coming from the ancient passive aorist infinitive γραφῆναι).",
"title": "Hellenic languages"
},
{
"paragraph_id": 37,
"text": "The infinitive in Russian usually ends in -t’ (ть) preceded by a thematic vowel, or -ti (ти), if not preceded by one; some verbs have a stem ending in a consonant and change the t to č’, like *mogt’ → moč’ (*могть → мочь) \"can\". Some other Balto-Slavic languages have the infinitive typically ending in, for example, -ć (sometimes -c) in Polish, -ť in Slovak, -t (formerly -ti) in Czech and Latvian (with a handful ending in -s on the latter), -ty (-ти) in Ukrainian, -ць (-ts') in Belarusian. Lithuanian infinitives end in -ti, Serbo-Croatian in -ti or -ći, and Slovenian in -ti or -či.",
"title": "Balto-Slavic languages"
},
{
"paragraph_id": 38,
"text": "Serbian officially retains infinitives -ti or -ći, but is more flexible than the other slavic languages in breaking the infinitive through a clause. The infinitive nevertheless remains the dictionary form.",
"title": "Balto-Slavic languages"
},
{
"paragraph_id": 39,
"text": "Bulgarian and Macedonian have lost the infinitive altogether except in a handful of frozen expressions where it is the same as the 3rd person singular aorist form. Almost all expressions where an infinitive may be used in Bulgarian are listed here; neverthess in all cases a subordinate clause is the more usual form. For that reason, the present first-person singular conjugation is the dictionary form in Bulgarian, while Macedonian uses the third person singular form of the verb in present tense.",
"title": "Balto-Slavic languages"
},
{
"paragraph_id": 40,
"text": "Hebrew has two infinitives, the infinitive absolute (המקור המוחלט) and the infinitive construct (המקור הנטוי or שם הפועל). The infinitive construct is used after prepositions and is inflected with pronominal endings to indicate its subject or object: בכתוב הסופר bikhtōbh hassōphēr \"when the scribe wrote\", אחרי לכתו ahare lekhtō \"after his going\". When the infinitive construct is preceded by ל (lə-, li-, lā-, lo-) \"to\", it has a similar meaning to the English to-infinitive, and this is its most frequent use in Modern Hebrew. The infinitive absolute is used for verb focus and emphasis, like in מות ימות mōth yāmūth (literally \"a dying he will die\"; figuratively, \"he shall indeed/surely die\"). This usage is commonplace in the Hebrew Bible. In Modern Hebrew it is restricted to high-register literary works.",
"title": "Hebrew"
},
{
"paragraph_id": 41,
"text": "Note, however, that the to-infinitive of Hebrew is not the dictionary form; that is the third person singular past form.",
"title": "Hebrew"
},
{
"paragraph_id": 42,
"text": "The Finnish grammatical tradition includes many non-finite forms that are generally labeled as (numbered) infinitives although many of these are functionally converbs. To form the so-called first infinitive, the strong form of the root (without consonant gradation or epenthetic 'e') is used, and these changes occur:",
"title": "Finnish"
},
{
"paragraph_id": 43,
"text": "As such, it is inconvenient for dictionary use, because the imperative would be closer to the root word. Nevertheless, dictionaries use the first infinitive.",
"title": "Finnish"
},
{
"paragraph_id": 44,
"text": "There are also four other infinitives, plus a \"long\" form of the first:",
"title": "Finnish"
},
{
"paragraph_id": 45,
"text": "Note that all of these must change to reflect vowel harmony, so the fifth infinitive (with a third-person suffix) of hypätä \"jump\" is hyppäämäisillään \"he was about to jump\", not *hyppäämaisillaan.",
"title": "Finnish"
},
{
"paragraph_id": 46,
"text": "The Seri language of northwestern Mexico has infinitival forms used in two constructions (with the verb meaning 'want' and with the verb meaning 'be able'). The infinitive is formed by adding a prefix to the stem: either iha- [iʔa-] (plus a vowel change of certain vowel-initial stems) if the complement clause is transitive, or ica- [ika-] (and no vowel change) if the complement clause is intransitive. The infinitive shows agreement in number with the controlling subject. Examples are: icatax ihmiimzo 'I want to go', where icatax is the singular infinitive of the verb 'go' (singular root is -atax), and icalx hamiimcajc 'we want to go', where icalx is the plural infinitive. Examples of the transitive infinitive: ihaho 'to see it/him/her/them' (root -aho), and ihacta 'to look at it/him/her/them' (root -oocta).",
"title": "Seri"
},
{
"paragraph_id": 47,
"text": "In languages without an infinitive, the infinitive is translated either as a that-clause or as a verbal noun. For example, in Literary Arabic the sentence \"I want to write a book\" is translated as either urīdu an aktuba kitāban (lit. \"I want that I write a book\", with a verb in the subjunctive mood) or urīdu kitābata kitābin (lit. \"I want the writing of a book\", with the masdar or verbal noun), and in Levantine Colloquial Arabic biddi aktub kitāb (subordinate clause with verb in subjunctive).",
"title": "Translation to languages without an infinitive"
},
{
"paragraph_id": 48,
"text": "Even in languages that have infinitives, similar constructions are sometimes necessary where English would allow the infinitive. For example, in French the sentence \"I want you to come\" translates to Je veux que vous veniez (lit. \"I want that you come\", come being in the subjunctive mood). However, \"I want to come\" is simply Je veux venir, using the infinitive, just as in English. In Russian, sentences such as \"I want you to leave\" do not use an infinitive. Rather, they use the conjunction чтобы \"in order to/so that\" with the past tense form (most probably remnant of subjunctive) of the verb: Я хочу, чтобы вы ушли (literally, \"I want so that you left\").",
"title": "Translation to languages without an infinitive"
}
]
| Infinitive is a linguistics term for certain verb forms existing in many languages, most often used as non-finite verbs. As with many linguistic concepts, there is not a single definition applicable to all languages. The name is derived from Late Latin [modus] infinitivus, a derivative of infinitus meaning "unlimited". In traditional descriptions of English, the infinitive is the basic dictionary form of a verb when used non-finitely, with or without the particle to. Thus to go is an infinitive, as is go in a sentence like "I must go there". The form without to is called the bare infinitive, and the form with to is called the full infinitive or to-infinitive. In many other languages the infinitive is a distinct single word, often with a characteristic inflective ending, like cantar in Portuguese, morir in Spanish, manger in French, portare in Latin and Italian, lieben in German, читать in Russian, etc. However, some languages have no infinitive forms. Many Native American languages, Arabic, Asian languages such as Japanese, and some languages in Africa and Australia do not have direct equivalents to infinitives or verbal nouns. Instead, they use finite verb forms in ordinary clauses or various special constructions. Being a verb, an infinitive may take objects and other complements and modifiers to form a verb phrase. Like other non-finite verb forms, infinitives do not generally have an expressed subject; thus an infinitive verb phrase also constitutes a complete non-finite clause, called an infinitive (infinitival) clause. Such phrases or clauses may play a variety of roles within sentences, often being nouns, and sometimes being adverbs or other types of modifier. Many verb forms known as infinitives differ from gerunds in that they do not inflect for case or occur in adpositional phrases. Instead, infinitives often originate in earlier inflectional forms of verbal nouns. Unlike finite verbs, infinitives are not usually inflected for tense, person, etc. either, although some degree of inflection sometimes occurs; for example Latin has distinct active and passive infinitives. | 2001-11-18T08:45:15Z | 2023-12-27T13:23:22Z | [
"Template:Slink",
"Template:Script/Hebrew",
"Template:Wiktionary",
"Template:Citation",
"Template:Lang",
"Template:Anchor",
"Template:See also",
"Template:Citation needed",
"Template:Cite journal",
"Template:Cite book",
"Template:Cite thesis",
"Template:Sc",
"Template:Lexical categories",
"Template:Short description",
"Template:Main",
"Template:IPA",
"Template:Reflist",
"Template:Authority control"
]
| https://en.wikipedia.org/wiki/Infinitive |
15,256 | Immaculate Conception | The Immaculate Conception is the belief that the Virgin Mary was free of original sin from the moment of her conception. It is one of the four Marian dogmas of the Catholic Church. Debated by medieval theologians, it was not defined as a dogma until 1854, by Pope Pius IX in the papal bull Ineffabilis Deus. While the Immaculate Conception asserts Mary's freedom from original sin, the Council of Trent, held between 1545 and 1563, had previously affirmed her freedom from personal sin.
The Immaculate Conception became a popular subject in literature, but its abstract nature meant it was late in appearing as a subject in works of art. The iconography of Our Lady of the Immaculate Conception shows Mary standing, with arms outstretched or hands clasped in prayer. The feast day of the Immaculate Conception is December 8.
Many Protestant churches rejected the doctrine of the Immaculate Conception as unscriptural, though some Anglicans accept it as a pious devotion. Opinions on the Immaculate Conception in Oriental Orthodoxy are divided: Shenouda III, Pope of the Coptic Orthodox Church, opposed the teaching, as did Patriarch Ignatius Zakka I of the Syriac Orthodox Church; the Eritrean and Ethiopian Orthodox Tewahedo accept it. It is not accepted by Eastern Orthodoxy due to differences in the understanding of original sin, although they do affirm Mary's purity and preservation from sin. Patriarch Anthimus VII of Constantinople (1827–1913) characterized the dogma of the Immaculate Conception as a "Roman novelty".
Anne, the mother of Mary, first appears in the 2nd-century apocryphal Gospel of James, and the author created his story by drawing on Greek tales of the childhood of heroes and on the Old Testament story of Hannah (hence the name Anna/Anne), the mother of the biblical Samuel. Anne and her husband, Joachim, are infertile, but God hears their prayers and Mary is conceived. Within the Gospel of James, the conception occurs without sexual intercourse between Anne and Joachim, which fits well with the Gospel of James' persistent emphasis on Mary's sacred purity, but the story does not advance the idea of an immaculate conception. The author of the Gospel of James may have based this account of Mary's conception on that of John the Baptist as recounted in the Gospel of Luke. The Eastern Orthodox Church holds that "Mary is conceived by her parents as we are all conceived".
According to the Catholic Encyclopedia, Justin Martyr, Irenaeus, and Cyril of Jerusalem developed the idea of Mary as the New Eve, drawing comparison to Eve, while yet immaculate and incorrupt — that is to say, not subject to original sin. The encyclopedia adds that Ephrem the Syrian said she was as innocent as Eve before the Fall.
Ambrose asserted Mary's incorruptibility, attributing her virginity to grace and immunity from sin. Severus, Bishop of Antioch, concurred affirming Mary's purity and immaculateness. John Damascene extended the supernatural influence of God to Mary's parents, suggesting they were purified by the Holy Spirit during her generation. According to Damascene, even the material of Mary's origin was deemed pure and holy. This perspective, which emphasized an immaculate active generation and the sanctity of the conceptio carnis, found resonance among some Western authors. Notably, the Greek Fathers did not explicitly discuss the Immaculate Conception.
By the 4th century the idea that Mary was free from sin was generally more widespread, but original sin raised the question of whether she was also free of the sin passed down from Adam. The question became acute when the feast of her conception began to be celebrated in England in the 11th century, and the opponents of the feast of Mary's conception brought forth the objection that as sexual intercourse is sinful, to celebrate Mary's conception was to celebrate a sinful event. The feast of Mary's conception originated in the Eastern Church in the 7th century, reached England in the 11th, and from there spread to Europe, where it was given official approval in 1477 and extended to the whole church in 1693; the word "immaculate" was not officially added to the name of the feast until 1854.
The doctrine of the Immaculate Conception caused a virtual civil war between Franciscans and Dominicans during the Middle Ages, with Franciscan 'Scotists' in its favour and Dominican 'Thomists' against it. The English ecclesiastic and scholar Eadmer (c. 1060 – c. 1126) reasoned that it was possible that Mary was conceived without original sin in view of God's omnipotence, and that it was also appropriate in view of her role as Mother of God: Potuit, decuit, fecit, "it was possible, it was fitting, therefore it was done". Others, including Bernard of Clairvaux (1090–1153) and Thomas Aquinas (1225–1274), objected that if Mary were free of original sin at her conception then she would have no need of redemption, making Christ's saving redemption superfluous; they were answered by Duns Scotus (1264–1308), who "developed the idea of preservative redemption as being a more perfect one: to have been preserved free from original sin was a greater grace than to be set free from sin". In 1439, the Council of Basel, in schism with Pope Eugene IV, who resided at the Council of Florence, declared the Immaculate Conception a "pious opinion" consistent with faith and Scripture; the Council of Trent, held in several sessions between 1545 and 1563, made no explicit declaration on the subject but exempted her from the universality of original sin, and also affirmed that she remained during all her life free from all stain of sin, even venial sin; by 1571 the revised Roman Breviary set out an elaborate celebration of the Feast of the Immaculate Conception on 8 December.
The eventual creation of the dogma was due more to popular devotion than scholarship. The Immaculate Conception became a popular subject in literature and art, and some devotees went so far as to hold that Anne had conceived Mary by kissing her husband Joachim, and that Anne's father and grandmother had likewise been conceived without sexual intercourse, although Bridget of Sweden (c. 1303–1373) told how Mary herself had revealed to her that Anne and Joachim conceived their daughter through a sexual union which was sinless because it was pure and free of sexual lust.
In the 16th and especially the 17th centuries there was a proliferation of Immaculatist devotion in Spain, leading the Habsburg monarchs to demand that the papacy elevate the belief to the status of dogma. In France in 1830 Catherine Labouré (May 2, 1806 – December 31, 1876) saw a vision of Mary standing on a globe while a voice commanded her to have a medal made in imitation of what she saw. The medal said "O Mary, conceived without sin, pray for us who have recourse to thee", which was taken as a confirmation from Mary herself that she was conceived without sin, affirming the Immaculate Conception. Her vision marked the beginning of a great 19th-century Marian revival.
In 1849 Pope Pius IX issued the encyclical Ubi primum soliciting the bishops of the church for their views on whether the doctrine should be defined as dogma; ninety percent of those who responded were supportive, although the Archbishop of Paris, Marie-Dominique-Auguste Sibour, warned that the Immaculate Conception "could be proved neither from the Scriptures nor from tradition", and in 1854 the Immaculate Conception dogma was proclaimed with the bull Ineffabilis Deus.
We declare, pronounce, and define that the doctrine which holds that the most Blessed Virgin Mary, in the first instance of her conception, by a singular grace and privilege granted by Almighty God, in view of the merits of Jesus Christ, the Saviour of the human race, was preserved free from all stain of original sin, is a doctrine revealed by God and therefore to be believed firmly and constantly by all the faithful.
Dom Prosper Guéranger, Abbot of Solesmes Abbey, who had been one of the main promoters of the dogmatic statement, wrote Mémoire sur l'Immaculée Conception, explaining what he saw as its basis:
For the belief to be defined as a dogma of faith [...] it is necessary that the Immaculate Conception form part of Revelation, expressed in Scripture or Tradition, or be implied in beliefs previously defined. Needed, afterward, is that it be proposed to the faith of the faithful through the teaching of the ordinary magisterium. Finally, it is necessary that it be attested by the liturgy, and the Fathers and Doctors of the Church.
Guéranger maintained that these conditions were met and that the definition was therefore possible. Ineffabilis Deus found the Immaculate Conception in the Ark of Salvation (Noah's Ark), Jacob's Ladder, the Burning Bush at Sinai, the Enclosed Garden from the Song of Songs, and many more passages. From this wealth of support the pope's advisors singled out Genesis 3:15: "The most glorious Virgin ... was foretold by God when he said to the serpent: 'I will put enmity between you and the woman,'" a prophecy which reached fulfilment in the figure of the Woman in the Revelation of John, crowned with stars and trampling the Dragon underfoot. Luke 1:28, and specifically the phrase "full of grace" by which Gabriel greeted Mary, was another reference to her Immaculate Conception: "she was never subject to the curse and was, together with her Son, the only partaker of perpetual benediction".
Ineffabilis Deus was one of the pivotal events of the papacy of Pius, pope from 16 June 1846 to his death on 7 February 1878. Four years after the proclamation of the dogma, in 1858, the young Bernadette Soubirous said that Mary appeared to her at Lourdes in southern France, to announce that she was the Immaculate Conception; the Catholic Church later endorsed the apparition as authentic. There are other (approved) Marian apparitions in which Mary identified herself as the Immaculate Conception, for example Our Lady of Gietrzwald in 1877, Poland.
The feast day of the Immaculate Conception is December 8. The Roman Missal and the Roman Rite Liturgy of the Hours include references to Mary's immaculate conception in the feast of the Immaculate Conception. Its celebration seems to have begun in the Eastern church in the 7th century and may have spread to Ireland by the 8th, although the earliest well-attested record in the Western church is from England early in the 11th. It was suppressed there after the Norman Conquest (1066), and the first thorough exposition of the doctrine was a response to this suppression. It continued to spread through the 15th century despite accusations of heresy from the Thomists and strong objections from several prominent theologians. Beginning around 1140, Bernard of Clairvaux, a Cistercian monk, wrote to Lyons Cathedral to express his surprise and dissatisfaction that it had recently begun to be observed there, but in 1477 Pope Sixtus IV, a Franciscan Scotist and devoted Immaculatist, placed it on the Roman calendar (i.e., list of church festivals and observances) via the bull Cum praexcelsa. Thereafter in 1481 and 1483, in response to the polemic writings of the prominent Thomist, Vincenzo Bandello, Pope Sixtus IV published two more bulls which forbade anybody to preach or teach against the Immaculate Conception, or for either side to accuse the other of heresy, on pain of excommunication. Pope Pius V kept the feast on the Tridentine calendar but suppressed the word "immaculate". Gregory XV in 1622 prohibited any public or private assertion that Mary was conceived in sin. Urban VIII in 1624 allowed the Franciscans to establish a military order dedicated to the Virgin of the Immaculate Conception. Following the promulgation of Ineffabilis Deus the typically Franciscan phrase "immaculate conception" reasserted itself in the title and euchology (prayer formulae) of the feast. Pius IX solemnly promulgated a mass formulary drawn chiefly from one composed 400 years earlier by a papal chamberlain at the behest of Sixtus IV, beginning "O God who by the Immaculate Conception of the Virgin".
The Roman Rite liturgical books, including the Roman Missal and the Liturgy of the Hours, included offices venerating Mary's immaculate conception on the feast of the Immaculate Conception. An example is the antiphon that begins: "Tota pulchra es, Maria, et macula originalis non est in te" ("You are all beautiful, Mary, and the original stain [of sin] is not in you". It continues: "Your clothing is white as snow, and your face is like the sun. You are all beautiful, Mary, and the original stain [of sin] is not in you. You are the glory of Jerusalem, you are the joy of Israel, you give honour to our people. You are all beautiful, Mary".) On the basis of the original Gregorian chant music, polyphonic settings have been composed by Anton Bruckner, Pablo Casals, Maurice Duruflé, Grzegorz Gerwazy Gorczycki, Ola Gjeilo, José Maurício Nunes Garcia, and Nikolaus Schapfl [de].
Other prayers honouring Mary's immaculate conception are in use outside the formal liturgy. The Immaculata prayer, composed by Maximilian Kolbe, is a prayer of entrustment to Mary as the Immaculata. A novena of prayers, with a specific prayer for each of the nine days, has been composed under the title of the Immaculate Conception Novena.
Ave Maris Stella is the vesper hymn of the feast of the Immaculate Conception. The hymn Immaculate Mary, addressed to Mary as the Immaculately Conceived One, is closely associated with Lourdes.
The Immaculate Conception became a popular subject in literature, but its abstract nature meant it was late in appearing as a subject in art. During the Medieval period it was depicted as "Joachim and Anne Meeting at the Golden Gate", meaning Mary's conception through the chaste kiss of her parents at the Golden Gate in Jerusalem; the 14th and 15th centuries were the heyday for this scene, after which it was gradually replaced by more allegorical depictions featuring an adult Mary.
The definitive iconography for the depiction of "Our Lady of the Immaculate Conception" seems to have been finally established by the painter and theorist Francisco Pacheco in his "El arte de la pintura" of 1649: a beautiful young girl of 12 or 13, wearing a white tunic and blue mantle, rays of light emanating from her head ringed by twelve stars and crowned by an imperial crown, the sun behind her and the moon beneath her feet. Pacheco's iconography influenced other Spanish artists or artists active in Spain such as El Greco, Bartolomé Murillo, Diego Velázquez, and Francisco Zurbarán, who each produced a number of artistic masterpieces based on the use of these same symbols. The popularity of this particular representation of The Immaculate Conception spread across the rest of Europe, and has since remained the best known artistic depiction of the concept: in a heavenly realm, moments after her creation, the spirit of Mary (in the form of a young woman) looks up in awe at (or bows her head to) God. The moon is under her feet and a halo of twelve stars surround her head, possibly a reference to "a woman clothed with the sun" from Revelation 12:1–2. Additional imagery may include clouds, a golden light, and putti. In some paintings the putti are holding lilies and roses, flowers often associated with Mary.
Eastern Orthodoxy never accepted Augustine's specific ideas on original sin, and in consequence did not become involved in the later developments that took place in the Roman Catholic Church, including the Immaculate Conception. In 1894, when Pope Leo XIII addressed the Eastern church in his encyclical Praeclara gratulationis, Ecumenical Patriarch Anthimos, in 1895, replied with an encyclical approved by the Constantinopolitan Synod in which he stigmatised the dogmas of the Immaculate Conception and papal infallibility as "Roman novelties" and called on the Roman church to return to the faith of the early centuries. Eastern Orthodox Bishop Kallistos Ware comments that "the Latin dogma seems to us not so much erroneous as superfluous".
The Ethiopian Orthodox Tewahedo and Eritrean Orthodox Tewahedo Churches believe in the Immaculate Conception of the Theotokos. The Ethiopian Orthodox Tewahedo Church celebrates the Feast of the Immaculate Conception on Nehasie 7 (August 13).
In the mid-19th century, some Catholics who were unable to accept the doctrine of papal infallibility left the Roman Church and formed the Old Catholic Church. This movement rejects the Immaculate Conception.
Protestants overwhelmingly condemned the promulgation of Ineffabilis Deus as an exercise in papal power, and the doctrine itself as unscriptural, for it denied that all had sinned and rested on the Latin translation of Luke 1:28 (the "full of grace" passage) that the original Greek did not support. Protestants, therefore, teach that Mary was a sinner saved through grace, like all believers.
The Catholic–Lutheran dialogue's statement The One Mediator, the Saints, and Mary, issued in 1990 after seven years of study and discussion, conceded that Lutherans and Catholics remained separated "by differing views on matters such as the invocation of saints, the Immaculate Conception and the Assumption of Mary"; the final report of the Anglican–Roman Catholic International Commission (ARCIC), created in 1969 to further ecumenical progress between the Roman Catholic Church and the Anglican Communion, similarly recorded the disagreement of the Anglicans with the doctrine, although Anglo-Catholics may hold the Immaculate Conception as an optional pious belief. | [
{
"paragraph_id": 0,
"text": "The Immaculate Conception is the belief that the Virgin Mary was free of original sin from the moment of her conception. It is one of the four Marian dogmas of the Catholic Church. Debated by medieval theologians, it was not defined as a dogma until 1854, by Pope Pius IX in the papal bull Ineffabilis Deus. While the Immaculate Conception asserts Mary's freedom from original sin, the Council of Trent, held between 1545 and 1563, had previously affirmed her freedom from personal sin.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Immaculate Conception became a popular subject in literature, but its abstract nature meant it was late in appearing as a subject in works of art. The iconography of Our Lady of the Immaculate Conception shows Mary standing, with arms outstretched or hands clasped in prayer. The feast day of the Immaculate Conception is December 8.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Many Protestant churches rejected the doctrine of the Immaculate Conception as unscriptural, though some Anglicans accept it as a pious devotion. Opinions on the Immaculate Conception in Oriental Orthodoxy are divided: Shenouda III, Pope of the Coptic Orthodox Church, opposed the teaching, as did Patriarch Ignatius Zakka I of the Syriac Orthodox Church; the Eritrean and Ethiopian Orthodox Tewahedo accept it. It is not accepted by Eastern Orthodoxy due to differences in the understanding of original sin, although they do affirm Mary's purity and preservation from sin. Patriarch Anthimus VII of Constantinople (1827–1913) characterized the dogma of the Immaculate Conception as a \"Roman novelty\".",
"title": ""
},
{
"paragraph_id": 3,
"text": "Anne, the mother of Mary, first appears in the 2nd-century apocryphal Gospel of James, and the author created his story by drawing on Greek tales of the childhood of heroes and on the Old Testament story of Hannah (hence the name Anna/Anne), the mother of the biblical Samuel. Anne and her husband, Joachim, are infertile, but God hears their prayers and Mary is conceived. Within the Gospel of James, the conception occurs without sexual intercourse between Anne and Joachim, which fits well with the Gospel of James' persistent emphasis on Mary's sacred purity, but the story does not advance the idea of an immaculate conception. The author of the Gospel of James may have based this account of Mary's conception on that of John the Baptist as recounted in the Gospel of Luke. The Eastern Orthodox Church holds that \"Mary is conceived by her parents as we are all conceived\".",
"title": "History"
},
{
"paragraph_id": 4,
"text": "According to the Catholic Encyclopedia, Justin Martyr, Irenaeus, and Cyril of Jerusalem developed the idea of Mary as the New Eve, drawing comparison to Eve, while yet immaculate and incorrupt — that is to say, not subject to original sin. The encyclopedia adds that Ephrem the Syrian said she was as innocent as Eve before the Fall.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Ambrose asserted Mary's incorruptibility, attributing her virginity to grace and immunity from sin. Severus, Bishop of Antioch, concurred affirming Mary's purity and immaculateness. John Damascene extended the supernatural influence of God to Mary's parents, suggesting they were purified by the Holy Spirit during her generation. According to Damascene, even the material of Mary's origin was deemed pure and holy. This perspective, which emphasized an immaculate active generation and the sanctity of the conceptio carnis, found resonance among some Western authors. Notably, the Greek Fathers did not explicitly discuss the Immaculate Conception.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "By the 4th century the idea that Mary was free from sin was generally more widespread, but original sin raised the question of whether she was also free of the sin passed down from Adam. The question became acute when the feast of her conception began to be celebrated in England in the 11th century, and the opponents of the feast of Mary's conception brought forth the objection that as sexual intercourse is sinful, to celebrate Mary's conception was to celebrate a sinful event. The feast of Mary's conception originated in the Eastern Church in the 7th century, reached England in the 11th, and from there spread to Europe, where it was given official approval in 1477 and extended to the whole church in 1693; the word \"immaculate\" was not officially added to the name of the feast until 1854.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The doctrine of the Immaculate Conception caused a virtual civil war between Franciscans and Dominicans during the Middle Ages, with Franciscan 'Scotists' in its favour and Dominican 'Thomists' against it. The English ecclesiastic and scholar Eadmer (c. 1060 – c. 1126) reasoned that it was possible that Mary was conceived without original sin in view of God's omnipotence, and that it was also appropriate in view of her role as Mother of God: Potuit, decuit, fecit, \"it was possible, it was fitting, therefore it was done\". Others, including Bernard of Clairvaux (1090–1153) and Thomas Aquinas (1225–1274), objected that if Mary were free of original sin at her conception then she would have no need of redemption, making Christ's saving redemption superfluous; they were answered by Duns Scotus (1264–1308), who \"developed the idea of preservative redemption as being a more perfect one: to have been preserved free from original sin was a greater grace than to be set free from sin\". In 1439, the Council of Basel, in schism with Pope Eugene IV who resided at the Council of Florence, declared the Immaculate Conception a \"pious opinion\" consistent with faith and Scripture; the Council of Trent, held in several sessions in the early 1500s, made no explicit declaration on the subject but exempted her from the universality of original sin; and also affirmed that she remained during all her life free from all stain of sin, even the venial one.; by 1571 the revised Roman Breviary set out an elaborate celebration of the Feast of the Immaculate Conception on 8 December.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The eventual creation of the dogma was due more to popular devotion than scholarship. The Immaculate Conception became a popular subject in literature and art, and some devotees went so far as to hold that Anne had conceived Mary by kissing her husband Joachim, and that Anne's father and grandmother had likewise been conceived without sexual intercourse, although Bridget of Sweden (c. 1303–1373) told how Mary herself had revealed to her that Anne and Joachim conceived their daughter through a sexual union which was sinless because it was pure and free of sexual lust.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In the 16th and especially the 17th centuries there was a proliferation of Immaculatist devotion in Spain, leading the Habsburg monarchs to demand that the papacy elevate the belief to the status of dogma. In France in 1830 Catherine Labouré (May 2, 1806 – December 31, 1876) saw a vision of Mary standing on a globe while a voice commanded her to have a medal made in imitation of what she saw. The medal said \"O Mary, conceived without sin, pray for us who have recourse to thee\", which was a confirmation of Mary herself that she was conceived without sin, confirming the Immaculate Conception. Her vision marked the beginning of a great 19th-century Marian revival.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In 1849 Pope Pius IX issued the encyclical Ubi primum soliciting the bishops of the church for their views on whether the doctrine should be defined as dogma; ninety percent of those who responded were supportive, although the Archbishop of Paris, Marie-Dominique-Auguste Sibour, warned that the Immaculate Conception \"could be proved neither from the Scriptures nor from tradition\", and in 1854 the Immaculate Conception dogma was proclaimed with the bull Ineffabilis Deus.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "We declare, pronounce, and define that the doctrine which holds that the most Blessed Virgin Mary, in the first instance of her conception, by a singular grace and privilege granted by Almighty God, in view of the merits of Jesus Christ, the Saviour of the human race, was preserved free from all stain of original sin, is a doctrine revealed by God and therefore to be believed firmly and constantly by all the faithful.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Dom Prosper Guéranger, Abbot of Solesmes Abbey, who had been one of the main promoters of the dogmatic statement, wrote Mémoire sur l'Immaculée Conception, explaining what he saw as its basis:",
"title": "History"
},
{
"paragraph_id": 13,
"text": "For the belief to be defined as a dogma of faith [...] it is necessary that the Immaculate Conception form part of Revelation, expressed in Scripture or Tradition, or be implied in beliefs previously defined. Needed, afterward, is that it be proposed to the faith of the faithful through the teaching of the ordinary magisterium. Finally, it is necessary that it be attested by the liturgy, and the Fathers and Doctors of the Church.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Guéranger maintained that these conditions were met and that the definition was therefore possible. Ineffabilis Deus found the Immaculate Conception in the Ark of Salvation (Noah's Ark), Jacob's Ladder, the Burning Bush at Sinai, the Enclosed Garden from the Song of Songs, and many more passages. From this wealth of support the pope's advisors singled out Genesis 3:15: \"The most glorious Virgin ... was foretold by God when he said to the serpent: 'I will put enmity between you and the woman,'\" a prophecy which reached fulfilment in the figure of the Woman in the Revelation of John, crowned with stars and trampling the Dragon underfoot. Luke 1:28, and specifically the phrase \"full of grace\" by which Gabriel greeted Mary, was another reference to her Immaculate Conception: \"she was never subject to the curse and was, together with her Son, the only partaker of perpetual benediction\".",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Ineffabilis Deus was one of the pivotal events of the papacy of Pius, pope from 16 June 1846 to his death on 7 February 1878. Four years after the proclamation of the dogma, in 1858, the young Bernadette Soubirous said that Mary appeared to her at Lourdes in southern France, to announce that she was the Immaculate Conception; the Catholic Church later endorsed the apparition as authentic. There are other (approved) Marian apparitions in which Mary identified herself as the Immaculate Conception, for example Our Lady of Gietrzwald in 1877, Poland.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "The feast day of the Immaculate Conception is December 8. The Roman Missal and the Roman Rite Liturgy of the Hours include references to Mary's immaculate conception in the feast of the Immaculate Conception. Its celebration seems to have begun in the Eastern church in the 7th century and may have spread to Ireland by the 8th, although the earliest well-attested record in the Western church is from England early in the 11th. It was suppressed there after the Norman Conquest (1066), and the first thorough exposition of the doctrine was a response to this suppression. It continued to spread through the 15th century despite accusations of heresy from the Thomists and strong objections from several prominent theologians. Beginning around 1140 Bernard of Clairvaux, a Cistercian monk, wrote to Lyons Cathedral to express his surprise and dissatisfaction that it had recently begun to be observed there, but in 1477 Pope Sixtus IV, a Franciscan Scotist and devoted Immaculist, placed it on the Roman calendar (i.e., list of church festivals and observances) via the bull Cum praexcelsa. Thereafter in 1481 and 1483, in response to the polemic writings of the prominent Thomist, Vincenzo Bandello, Pope Sixtus IV published two more bulls which forbade anybody to preach or teach against the Immaculate Conception, or for either side to accuse the other of heresy, on pains of excommunication. Pope Pius V kept the feast on the tridentine calendar but suppressed the word \"immaculate\". Gregory XV in 1622 prohibited any public or private assertion that Mary was conceived in sin. Urban VIII in 1624 allowed the Franciscans to establish a military order dedicated to the Virgin of the Immaculate Conception. Following the promulgation of Ineffabilis Deus the typically Franciscan phrase \"immaculate conception\" reasserted itself in the title and euchology (prayer formulae) of the feast. Pius IX solemnly promulgated a mass formulary drawn chiefly from one composed 400 years by a papal chamberlain at the behest of Sixtus IV, beginning \"O God who by the Immaculate Conception of the Virgin\".",
"title": "Feast, patronages and disputes"
},
{
"paragraph_id": 17,
"text": "The Roman Rite liturgical books, including the Roman Missal and the Liturgy of the Hours, included offices venerating Mary's immaculate conception on the feast of the Immaculate Conception. An example is the antiphon that begins: \"Tota pulchra es, Maria, et macula originalis non est in te\" (\"You are all beautiful, Mary, and the original stain [of sin] is not in you\". It continues: \"Your clothing is white as snow, and your face is like the sun. You are all beautiful, Mary, and the original stain [of sin] is not in you. You are the glory of Jerusalem, you are the joy of Israel, you give honour to our people. You are all beautiful, Mary\".) On the basis of the original Gregorian chant music, polyphonic settings have been composed by Anton Bruckner, Pablo Casals, Maurice Duruflé, Grzegorz Gerwazy Gorczycki, Ola Gjeilo, José Maurício Nunes Garcia, and Nikolaus Schapfl [de].",
"title": "Prayers and hymns"
},
{
"paragraph_id": 18,
"text": "Other prayers honouring Mary's immaculate conception are in use outside the formal liturgy. The Immaculata prayer, composed by Maximillian Kolbe, is a prayer of entrustment to Mary as the Immaculata. A novena of prayers, with a specific prayer for each of the nine days has been composed under the title of the Immaculate Conception Novena.",
"title": "Prayers and hymns"
},
{
"paragraph_id": 19,
"text": "Ave Maris Stella is the vesper hymn of the feast of the Immaculate Conception. The hymn Immaculate Mary, addressed to Mary as the Immaculately Conceived One, is closely associated with Lourdes.",
"title": "Prayers and hymns"
},
{
"paragraph_id": 20,
"text": "The Immaculate Conception became a popular subject in literature, but its abstract nature meant it was late in appearing as a subject in art. During the Medieval period it was depicted as \"Joachim and Anne Meeting at the Golden Gate\", meaning Mary's conception through the chaste kiss of her parents at the Golden Gate in Jerusalem; the 14th and 15th centuries were the heyday for this scene, after which it was gradually replaced by more allegorical depictions featuring an adult Mary.",
"title": "Artistic representation"
},
{
"paragraph_id": 21,
"text": "The definitive iconography for the depiction of \"Our Lady of the Immaculate Conception\" seems to have been finally established by the painter and theorist Francisco Pacheco in his \"El arte de la pintura\" of 1649: a beautiful young girl of 12 or 13, wearing a white tunic and blue mantle, rays of light emanating from her head ringed by twelve stars and crowned by an imperial crown, the sun behind her and the moon beneath her feet. Pacheco's iconography influenced other Spanish artists or artists active in Spain such as El Greco, Bartolomé Murillo, Diego Velázquez, and Francisco Zurbarán, who each produced a number of artistic masterpieces based on the use of these same symbols. The popularity of this particular representation of The Immaculate Conception spread across the rest of Europe, and has since remained the best known artistic depiction of the concept: in a heavenly realm, moments after her creation, the spirit of Mary (in the form of a young woman) looks up in awe at (or bows her head to) God. The moon is under her feet and a halo of twelve stars surround her head, possibly a reference to \"a woman clothed with the sun\" from Revelation 12:1–2. Additional imagery may include clouds, a golden light, and putti. In some paintings the putti are holding lilies and roses, flowers often associated with Mary.",
"title": "Artistic representation"
},
{
"paragraph_id": 22,
"text": "Eastern Orthodoxy never accepted Augustine's specific ideas on original sin, and in consequence did not become involved in the later developments that took place in the Roman Catholic Church, including the Immaculate Conception. In 1894, when Pope Leo XIII addressed the Eastern church in his encyclical Praeclara gratulationis, Ecumenical Patriarch Anthimos, in 1895, replied with an encyclical approved by the Constantinopolitan Synod in which he stigmatised the dogmas of the Immaculate Conception and papal infallibility as \"Roman novelties\" and called on the Roman church to return to the faith of the early centuries. Eastern Orthodox Bishop Kallistos Ware comments that \"the Latin dogma seems to us not so much erroneous as superfluous\".",
"title": "Other denominations"
},
{
"paragraph_id": 23,
"text": "The Ethiopian Orthodox Tewahedo and Eritrean Orthodox Tewahedo Churches believe in the Immaculate Conception of the Theotokos. The Ethiopian Orthodox Tewahedo Church celebrates the Feast of the Immaculate Conception on Nehasie 7 (August 13).",
"title": "Other denominations"
},
{
"paragraph_id": 24,
"text": "In the mid-19th century, some Catholics who were unable to accept the doctrine of papal infallibility left the Roman Church and formed the Old Catholic Church. This movement rejects the Immaculate Conception.",
"title": "Other denominations"
},
{
"paragraph_id": 25,
"text": "Protestants overwhelmingly condemned the promulgation of Ineffabilis Deus as an exercise in papal power, and the doctrine itself as unscriptural, for it denied that all had sinned and rested on the Latin translation of Luke 1:28 (the \"full of grace\" passage) that the original Greek did not support. Protestants, therefore, teach that Mary was a sinner saved through grace, like all believers.",
"title": "Other denominations"
},
{
"paragraph_id": 26,
"text": "The Catholic–Lutheran dialogue's statement The One Mediator, the Saints, and Mary, issued in 1990 after seven years of study and discussion, conceded that Lutherans and Catholics remained separated \"by differing views on matters such as the invocation of saints, the Immaculate Conception and the Assumption of Mary\"; the final report of the Anglican–Roman Catholic International Commission (ARCIC), created in 1969 to further ecumenical progress between the Roman Catholic Church and the Anglican Communion, similarly recorded the disagreement of the Anglicans with the doctrine, although Anglo-Catholics may hold the Immaculate Conception as an optional pious belief.",
"title": "Other denominations"
}
]
| The Immaculate Conception is the belief that the Virgin Mary was free of original sin from the moment of her conception. It is one of the four Marian dogmas of the Catholic Church. Debated by medieval theologians, it was not defined as a dogma until 1854, by Pope Pius IX in the papal bull Ineffabilis Deus. While the Immaculate Conception asserts Mary's freedom from original sin, the Council of Trent, held between 1545 and 1563, had previously affirmed her freedom from personal sin. The Immaculate Conception became a popular subject in literature, but its abstract nature meant it was late in appearing as a subject in works of art. The iconography of Our Lady of the Immaculate Conception shows Mary standing, with arms outstretched or hands clasped in prayer. The feast day of the Immaculate Conception is December 8. Many Protestant churches rejected the doctrine of the Immaculate Conception as unscriptural, though some Anglicans accept it as a pious devotion. Opinions on the Immaculate Conception in Oriental Orthodoxy are divided: Shenouda III, Pope of the Coptic Orthodox Church, opposed the teaching, as did Patriarch Ignatius Zakka I of the Syriac Orthodox Church; the Eritrean and Ethiopian Orthodox Tewahedo accept it. It is not accepted by Eastern Orthodoxy due to differences in the understanding of original sin, although they do affirm Mary's purity and preservation from sin. Patriarch Anthimus VII of Constantinople (1827–1913) characterized the dogma of the Immaculate Conception as a "Roman novelty". | 2001-11-19T08:13:25Z | 2023-12-26T15:09:23Z | [
"Template:Citation",
"Template:Refend",
"Template:Virgin Mary",
"Template:Liturgical year of the Catholic Church",
"Template:Catholic marian prayers sidebar",
"Template:Cite book",
"Template:Cite AV media",
"Template:Authority control",
"Template:Main",
"Template:Cbignore",
"Template:Refbegin",
"Template:Short description",
"Template:Sfn",
"Template:History of the Catholic Church",
"Template:Catholic saints",
"Template:Redirect-several",
"Template:Use mdy dates",
"Template:Citation-attribution",
"Template:Cite journal",
"Template:Commons category",
"Template:Circa",
"Template:' \"",
"Template:Further",
"Template:Madonna styles",
"Template:Ill",
"Template:Reflist",
"Template:Our Lady of Lourdes",
"Template:About",
"Template:Portal",
"Template:Cite web",
"Template:Infobox saint",
"Template:Catholicism"
]
| https://en.wikipedia.org/wiki/Immaculate_Conception |
15,260 | Islands of the North Atlantic | IONA (Islands of the North Atlantic) is an acronym suggested in 1980 by Sir John Biggs-Davison to refer to a loose linkage of the Channel Islands (Guernsey and Jersey), Great Britain (England, Scotland, and Wales), Ireland (Northern Ireland and the Republic of Ireland), and the Isle of Man, similar to the present day British–Irish Council. Its intended purpose was as a more politically acceptable alternative to the British Isles, which is disliked by some people in Ireland.
The neologism has been criticised on the grounds that it excludes most of the islands in the North Atlantic, and also that the only island referred to by the term that is actually in the North Atlantic Ocean is Ireland (Great Britain is in fact between the Irish Sea and the North Sea). In the context of the Northern Irish peace process, during the negotiation of the Good Friday Agreement, IONA was unsuccessfully proposed as a neutral name for the proposed council.
One feature of this name is that IONA has the same spelling as the island of Iona which is off the coast of Scotland, but with which Irish people have strong cultural associations. It is therefore a name with which people of both main islands might identify. Taoiseach Bertie Ahern noted the symbolism in a 2006 address in Edinburgh:
[The Island of] Iona is a powerful symbol of relationships between these islands, with its ethos of service not dominion. Iona also radiated out towards the Europe of the Dark Ages, not to mention Pagan England at Lindisfarne. The British-Irish Council is the expression of a relationship that at the origin of the Anglo-Irish process in 1981 was sometimes given the name Iona, islands of the North Atlantic, and sometimes Council of the Isles, with its evocation of the Lords of the Isles of the 14th and 15th centuries who spanned the North Channel. In the 17th century, Highland warriors and persecuted Presbyterian Ministers criss-crossed the North Channel.
In a Dáil Éireann debate, Proinsias De Rossa was less enthusiastic:
The acronym IONA is a useful way of addressing the coming together of these two islands. However, the island of Iona is probably a green heaven in that nobody lives on it and therefore it cannot be polluted in any way.
The term IONA is used by the World Universities Debating Championship. IONA is one of the regions that appoint a representative onto the committee of the World Universities Debating Council. Greenland, the Faroe Islands and Iceland are included in the definition of IONA used in this context, while Newfoundland and Prince Edward Island are in the North American region. However, none of these islands have yet participated in the World Universities Debating Championships. Otherwise, the term has achieved very little popular usage in any context. | [
{
"paragraph_id": 0,
"text": "IONA (Islands of the North Atlantic) is an acronym suggested in 1980 by Sir John Biggs-Davison to refer to a loose linkage of the Channel Islands (Guernsey and Jersey), Great Britain (England, Scotland, and Wales), Ireland (Northern Ireland and the Republic of Ireland), and the Isle of Man, similar to the present day British–Irish Council. Its intended purpose was as a more politically acceptable alternative to the British Isles, which is disliked by some people in Ireland.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The neologism has been criticised on the grounds that it excludes most of the islands in the North Atlantic, and also that the only island referred to by the term that is actually in the North Atlantic Ocean is Ireland (Great Britain is in fact in between the Irish Sea and The North Sea.) In the context of the Northern Irish peace process, during the negotiation of the Good Friday Agreement, IONA was unsuccessfully proposed as a neutral name for the proposed council.",
"title": ""
},
{
"paragraph_id": 2,
"text": "One feature of this name is that IONA has the same spelling as the island of Iona which is off the coast of Scotland, but with which Irish people have strong cultural associations. It is therefore a name with which people of both main islands might identify. Taoiseach Bertie Ahern noted the symbolism in a 2006 address in Edinburgh:",
"title": ""
},
{
"paragraph_id": 3,
"text": "[The Island of] Iona is a powerful symbol of relationships between these islands, with its ethos of service not dominion. Iona also radiated out towards the Europe of the Dark Ages, not to mention Pagan England at Lindisfarne. The British-Irish Council is the expression of a relationship that at the origin of the Anglo-Irish process in 1981 was sometimes given the name Iona, islands of the North Atlantic, and sometimes Council of the Isles, with its evocation of the Lords of the Isles of the 14th and 15th centuries who spanned the North Channel. In the 17th century, Highland warriors and persecuted Presbyterian Ministers criss-crossed the North Channel.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In a Dáil Éireann debate, Proinsias De Rossa was less enthusiastic:",
"title": ""
},
{
"paragraph_id": 5,
"text": "The acronym IONA is a useful way of addressing the coming together of these two islands. However, the island of Iona is probably a green heaven in that nobody lives on it and therefore it cannot be polluted in any way.",
"title": ""
},
{
"paragraph_id": 6,
"text": "The term IONA is used by the World Universities Debating Championship. IONA is one of the regions that appoint a representative onto the committee of the World Universities Debating Council. Greenland, the Faroe Islands and Iceland are included in the definition of IONA used in this context, while Newfoundland and Prince Edward Island are in the North American region. However, none of these islands have yet participated in the World Universities Debating Championships. Otherwise, the term has achieved very little popular usage in any context.",
"title": ""
}
]
| IONA is an acronym suggested in 1980 by Sir John Biggs-Davison to refer to a loose linkage of the Channel Islands, Great Britain, Ireland, and the Isle of Man, similar to the present day British–Irish Council. Its intended purpose was as a more politically acceptable alternative to the British Isles, which is disliked by some people in Ireland. The neologism has been criticised on the grounds that it excludes most of the islands in the North Atlantic, and also that the only island referred to by the term that is actually in the North Atlantic Ocean is Ireland. In the context of the Northern Irish peace process, during the negotiation of the Good Friday Agreement, IONA was unsuccessfully proposed as a neutral name for the proposed council. One feature of this name is that IONA has the same spelling as the island of Iona which is off the coast of Scotland, but with which Irish people have strong cultural associations. It is therefore a name with which people of both main islands might identify. Taoiseach Bertie Ahern noted the symbolism in a 2006 address in Edinburgh: In a Dáil Éireann debate, Proinsias De Rossa was less enthusiastic: The term IONA is used by the World Universities Debating Championship. IONA is one of the regions that appoint a representative onto the committee of the World Universities Debating Council. Greenland, the Faroe Islands and Iceland are included in the definition of IONA used in this context, while Newfoundland and Prince Edward Island are in the North American region. However, none of these islands have yet participated in the World Universities Debating Championships. Otherwise, the term has achieved very little popular usage in any context. | 2023-02-08T03:37:48Z | [
"Template:Redirect",
"Template:Cite news",
"Template:Cite book",
"Template:Cite web",
"Template:Webarchive",
"Template:Short description",
"Template:About",
"Template:Reflist",
"Template:British Isles"
]
| https://en.wikipedia.org/wiki/Islands_of_the_North_Atlantic |
|
15,261 | Intel DX4 | IntelDX4 is a clock-tripled i486 microprocessor with 16 KB level 1 cache. Intel named it DX4 (rather than DX3) as a consequence of litigation with AMD over trademarks. The product was officially named IntelDX4, but OEMs continued using the i486 naming convention.
Intel produced IntelDX4s with two clock speed steppings: A 75-MHz version (3× 25 MHz multiplier), and a 100-MHz version (3× 33.3 MHz). Both chips were released in March 1994. A version of IntelDX4 featuring write-back cache was released in October 1994. The original write-through versions of the chip are marked with a laser-embossed “&E,” while the write-back-enabled versions are marked “&EW.” i486 OverDrive editions of IntelDX4 had locked multipliers, and therefore can only run at 3× the external clock speed. The 100-MHz model of the processor had an iCOMP rating of 435, while the 75-MHz processor had a rating of 319. IntelDX4 was an OEM-only product, but the DX4 Overdrive could be purchased at a retail store.
The IntelDX4 microprocessor is mostly pin-compatible with the 80486, but requires a lower 3.3-V supply. Normal 80486 and DX2 processors use a 5-V supply; plugging a DX4 into an unmodified socket will destroy the processor. Motherboards lacking support for the 3.3-V CPUs can sometimes make use of them using a voltage regulator module (VRM) that fits between the socket and the CPU. The DX4 OverDrive CPUs have VRMs built in. | [
{
"paragraph_id": 0,
"text": "IntelDX4 is a clock-tripled i486 microprocessor with 16 KB level 1 cache. Intel named it DX4 (rather than DX3) as a consequence of litigation with AMD over trademarks. The product was officially named IntelDX4, but OEMs continued using the i486 naming convention.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Intel produced IntelDX4s with two clock speed steppings: A 75-MHz version (3× 25 MHz multiplier), and a 100-MHz version (3× 33.3 MHz). Both chips were released in March 1994. A version of IntelDX4 featuring write-back cache was released in October 1994. The original write-through versions of the chip are marked with a laser-embossed “&E,” while the write-back-enabled versions are marked “&EW.” i486 OverDrive editions of IntelDX4 had locked multipliers, and therefore can only run at 3× the external clock speed. The 100-MHz model of the processor had an iCOMP rating of 435, while the 75-MHz processor had a rating of 319. IntelDX4 was an OEM-only product, but the DX4 Overdrive could be purchased at a retail store.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The IntelDX4 microprocessor is mostly pin-compatible with the 80486, but requires a lower 3.3-V supply. Normal 80486 and DX2 processors use a 5-V supply; plugging a DX4 into an unmodified socket will destroy the processor. Motherboards lacking support for the 3.3-V CPUs can sometimes make use of them using a voltage regulator module (VRM) that fits between the socket and the CPU. The DX4 OverDrive CPUs have VRMs built in.",
"title": ""
}
]
| IntelDX4 is a clock-tripled i486 microprocessor with 16 KB level 1 cache. Intel named it DX4 as a consequence of litigation with AMD over trademarks. The product was officially named IntelDX4, but OEMs continued using the i486 naming convention. Intel produced IntelDX4s with two clock speed steppings: A 75-MHz version, and a 100-MHz version. Both chips were released in March 1994. A version of IntelDX4 featuring write-back cache was released in October 1994. The original write-through versions of the chip are marked with a laser-embossed “&E,” while the write-back-enabled versions are marked “&EW.” i486 OverDrive editions of IntelDX4 had locked multipliers, and therefore can only run at 3× the external clock speed. The 100-MHz model of the processor had an iCOMP rating of 435, while the 75-MHz processor had a rating of 319. IntelDX4 was an OEM-only product, but the DX4 Overdrive could be purchased at a retail store. The IntelDX4 microprocessor is mostly pin-compatible with the 80486, but requires a lower 3.3-V supply. Normal 80486 and DX2 processors use a 5-V supply; plugging a DX4 into an unmodified socket will destroy the processor. Motherboards lacking support for the 3.3-V CPUs can sometimes make use of them using a voltage regulator module (VRM) that fits between the socket and the CPU. The DX4 OverDrive CPUs have VRMs built in. | 2023-06-23T15:41:55Z | [
"Template:Reflist",
"Template:Bare URL PDF",
"Template:Intel processors"
]
| https://en.wikipedia.org/wiki/Intel_DX4 |
|
15,264 | Iapetus (disambiguation) | Iapetus is a Titan in Greek mythology.
Iapetus /aɪˈæpɪtəs/ may also refer to: | [
{
"paragraph_id": 0,
"text": "Iapetus is a Titan in Greek mythology.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Iapetus /aɪˈæpɪtəs/ may also refer to:",
"title": ""
}
]
| Iapetus is a Titan in Greek mythology. Iapetus may also refer to: Iapetus (moon), one of the planet Saturn's moons, named for the mythological Titan
Iapetus Ocean, an ancient ocean between the paleocontinents Laurentia and Baltica
Iapetus suture, line of closure of the Iapetus Ocean | 2018-04-26T04:26:02Z | [
"Template:IPAc-en",
"Template:Disambiguation",
"Template:Wiktionary"
]
| https://en.wikipedia.org/wiki/Iapetus_(disambiguation) |
|
15,266 | Interactive Fiction Competition | The Interactive Fiction Competition (also known as IFComp) is one of several annual competitions for works of interactive fiction. It has been held since 1995. It is intended for fairly short games, as judges are only allowed to spend two hours playing a game before deciding how many points to award it. The competition has been described as the "Super Bowl" of interactive fiction.
Since 2016 it is operated by the Interactive Fiction Technology Foundation (IFTF).
In 2016, operation of the competition was taken over by the Interactive Fiction Technology Foundation.
The lead organizer 2014-2017 was Jason McIntosh, and in 2018 it was Jacqueline Ashwell.
Although the first competition had separate sections for Inform and TADS games, subsequent competitions have not been divided into sections and are open to games produced by any method, provided that the software used to play the game is freely available.
In addition to the main competition, the entries take part in the Miss Congeniality contest, where the participating authors vote for three games (not including their own). This was started in 1998 to distribute that year's surplus prizes; this additional contest has remained unchanged since then, even without the original reason for its existence.
There is also a 'Golden Banana of Discord' side contest; the distinction is given to the entry with scores with the highest standard deviation.
The competition differs from the XYZZY Awards, as authors must specifically submit games to the Interactive Fiction Competition, but all games released in the past year are eligible for the XYZZY Awards. Many games win awards in both competitions.
Anyone can judge the games. Because anyone can judge and participate in the competition, there is a rule that "All entries must cost nothing for judges to play".
The competition has rules for judges, authors and everyone to ensure that everyone agrees on the purpose, scope, and spirit of the competition.
Anyone can donate a prize. Almost always, there are enough prizes donated that anyone who enters will get one.
The following is a list of first place winners to date:
Only two competitors have won more than once: Paul O'Brian, winning in 2002 and 2004, and Steph Cherrywell, winning in 2015 and 2019.
A reviewer for The A.V. Club said of the 2008 competition, "Once again, the IF Competition delivers some of the best writing in games." The 2008 competition was described as containing "some real standouts both in quality of puzzles and a willingness to stretch the definition of text adventures/interactive fiction." | [
{
"paragraph_id": 0,
"text": "The Interactive Fiction Competition (also known as IFComp) is one of several annual competitions for works of interactive fiction. It has been held since 1995. It is intended for fairly short games, as judges are only allowed to spend two hours playing a game before deciding how many points to award it. The competition has been described as the \"Super Bowl\" of interactive fiction.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Since 2016 it is operated by the Interactive Fiction Technology Foundation (IFTF).",
"title": ""
},
{
"paragraph_id": 2,
"text": "In 2016, operation of the competition was taken over by the Interactive Fiction Technology Foundation.",
"title": "Organization"
},
{
"paragraph_id": 3,
"text": "The lead organizer 2014-2017 was Jason McIntosh, and in 2018 it was Jacqueline Ashwell.",
"title": "Organization"
},
{
"paragraph_id": 4,
"text": "Although the first competition had separate sections for Inform and TADS games, subsequent competitions have not been divided into sections and are open to games produced by any method, provided that the software used to play the game is freely available.",
"title": "Categories"
},
{
"paragraph_id": 5,
"text": "In addition to the main competition, the entries take part in the Miss Congeniality contest, where the participating authors vote for three games (not including their own). This was started in 1998 to distribute that year's surplus prizes; this additional contest has remained unchanged since then, even without the original reason for its existence.",
"title": "Categories"
},
{
"paragraph_id": 6,
"text": "There is also a 'Golden Banana of Discord' side contest; the distinction is given to the entry with scores with the highest standard deviation.",
"title": "Categories"
},
{
"paragraph_id": 7,
"text": "The competition differs from the XYZZY Awards, as authors must specifically submit games to the Interactive Fiction Competition, but all games released in the past year are eligible for the XYZZY Awards. Many games win awards in both competitions.",
"title": "Eligibility"
},
{
"paragraph_id": 8,
"text": "Anyone can judge the games. Because anyone can judge and participate in the competition, there is a rule that \"All entries must cost nothing for judges to play\".",
"title": "Judging"
},
{
"paragraph_id": 9,
"text": "The competition has rules for judges, authors and everyone to ensure that everyone agrees on the purpose, scope, and spirit of the competition.",
"title": "Rules"
},
{
"paragraph_id": 10,
"text": "Anyone can donate a prize. Almost always, there are enough prizes donated that anyone who enters will get one.",
"title": "Prizes"
},
{
"paragraph_id": 11,
"text": "The following is a list of first place winners to date:",
"title": "Winners"
},
{
"paragraph_id": 12,
"text": "Only two competitors have won more than once: Paul O'Brian, winning in 2002 and 2004, and Steph Cherrywell, winning in 2015 and 2019.",
"title": "Winners"
},
{
"paragraph_id": 13,
"text": "A reviewer for The A.V. Club said of the 2008 competition, \"Once again, the IF Competition delivers some of the best writing in games.\" The 2008 competition was described as containing \"some real standouts both in quality of puzzles and a willingness to stretch the definition of text adventures/interactive fiction.\"",
"title": "Reception"
}
]
| The Interactive Fiction Competition is one of several annual competitions for works of interactive fiction. It has been held since 1995. It is intended for fairly short games, as judges are only allowed to spend two hours playing a game before deciding how many points to award it. The competition has been described as the "Super Bowl" of interactive fiction. Since 2016 it is operated by the Interactive Fiction Technology Foundation (IFTF). | 2001-11-21T18:01:17Z | 2023-12-12T21:42:04Z | [
"Template:Short description",
"Template:Div col",
"Template:Div col end",
"Template:Cite web"
]
| https://en.wikipedia.org/wiki/Interactive_Fiction_Competition |
15,267 | Immunity | Immunity may refer to: | [
{
"paragraph_id": 0,
"text": "Immunity may refer to:",
"title": ""
}
]
| Immunity may refer to: | 2023-06-28T14:12:49Z | [
"Template:Wiktionary",
"Template:TOC right",
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/Immunity |
|
15,268 | Inquests in England and Wales | Inquests in England and Wales are held into sudden or unexplained deaths and also into the circumstances of and discovery of a certain class of valuable artefacts known as "treasure trove". In England and Wales, inquests are the responsibility of a coroner, who operates under the jurisdiction of the Coroners and Justice Act 2009. In some circumstances where an inquest cannot view or hear all the evidence, it may be suspended and a public inquiry held with the consent of the Home Secretary.
There is a general duty upon every person to report a death to the coroner if an inquest is likely to be required. However, this duty is largely unenforceable in practice and the duty falls on the responsible registrar. The registrar must report a death where:
The coroner must hold an inquest where the death is:
Where the cause of death is unknown, the coroner may order a post mortem examination in order to determine whether the death was violent. If the death is found to be non-violent, an inquest is unnecessary.
In 2004 in England and Wales, there were 514,000 deaths of which 225,500 were referred to the coroner. Of those, 115,800 resulted in post-mortem examinations and there were 28,300 inquests, 570 with a jury. In 2014 the Royal College of Pathologists claimed that up to 10,000 deaths a year recorded as being from natural causes should have been investigated by inquests. They were particularly concerned about people whose death occurred as a result of medical errors. "We believe a medical examiner would have been alerted to what was going on in Mid-Staffordshire long before this long list of avoidable deaths reached the total it did," said Archie Prentice, the pathologists' president.
A coroner must summon a jury for an inquest if the death was not a result of natural causes and occurred when the deceased was in state custody (for example in prison, police custody, or whilst detained under the Mental Health Act 1983); or if it was the result of an act or omission of a police officer; or if it was a result of a notifiable accident, poisoning or disease. The senior coroner can also call a jury at his or her own discretion. This discretion has been heavily litigated in light of the Human Rights Act 1998, which means that juries are required now in a broader range of situations than expressly required by statute.
The purpose of the inquest is to answer four questions:
Evidence must be solely for the purpose of answering these questions and no other evidence is admitted. It is not for the inquest to ascertain "how the deceased died" or "in what broad circumstances", but "how the deceased came by his death", a more limited question. Moreover, it is not the purpose of the inquest to determine, or appear to determine, criminal or civil liability, to apportion guilt or attribute blame. For example, where a prisoner hanged himself in a cell, he came by his death by hanging and it was not the role of the inquest to enquire into the broader circumstances such as the alleged neglect of the prison authorities that might have contributed to his state of mind or given him the opportunity. However, the inquest should set out as many of the facts as the public interest requires.
Under the terms of article 2 of the European Convention of Human Rights, governments are required to "establish a framework of laws, precautions, procedures and means of enforcement which will, to the greatest extent reasonably practicable, protect life". The European Court of Human Rights has interpreted this as mandating independent official investigation of any death where public servants may be implicated. Since the Human Rights Act 1998 came into force, in those cases alone, the inquest is now to consider the broader question "by what means and in what circumstances".
In disasters, such as the 1987 King's Cross fire, a single inquest may be held into several deaths.
Inquests are governed by the Coroners Rules. The coroner gives notice to near relatives, those entitled to examine witnesses and those whose conduct is likely to be scrutinised. Inquests are held in public except where there are real and substantial issues of national security, but only the portions which relate to national security will be held behind closed doors.
Individuals with an interest in the proceedings, such as relatives of the deceased, individuals appearing as witnesses, and organisations or individuals who may face some responsibility in the death of the individual, may be represented by a legal professional be that a solicitor or barrister at the discretion of the coroner. Witnesses may be compelled to testify subject to the privilege against self-incrimination.
If there are matters of national security or other sensitive matters, then under Schedule 1 of the Coroners and Justice Act 2009 an inquest may be suspended and replaced by a public inquiry under s.2 of the Inquiries Act 2005. This can only be ordered by the Home Secretary and must be announced to Parliament, with both the coroner in charge and the next of kin being informed. The next of kin and the coroner can appeal the decision of the Home Secretary.
The following conclusions (formerly called verdicts) are not mandatory but are strongly recommended:
In 2004, 37% of inquests recorded an outcome of death by accident / misadventure, 21% by natural causes, 13% suicide, 10% open verdicts, and 19% other outcomes.
Since 2004 it has been possible for the coroner to record a narrative verdict, recording the circumstances of a death without apportioning blame or liability. Since 2009, other possible verdicts have included "alcohol/drug related death" and "road traffic collision". The civil standard of proof, on the balance of probabilities, is used for all conclusions. The standard of proof for suicide and unlawful killing changed in 2018 from beyond all reasonable doubt to the balance of probabilities following a case in the courts of appeal.
Owing in particular to the failures to notice the serial murder committed by Harold Shipman, the Coroners and Justice Act 2009 modernised the system with: | [
{
"paragraph_id": 0,
"text": "Inquests in England and Wales are held into sudden or unexplained deaths and also into the circumstances of and discovery of a certain class of valuable artefacts known as \"treasure trove\". In England and Wales, inquests are the responsibility of a coroner, who operates under the jurisdiction of the Coroners and Justice Act 2009. In some circumstances where an inquest cannot view or hear all the evidence, it may be suspended and a public inquiry held with the consent of the Home Secretary.",
"title": ""
},
{
"paragraph_id": 1,
"text": "There is a general duty upon every person to report a death to the coroner if an inquest is likely to be required. However, this duty is largely unenforceable in practice and the duty falls on the responsible registrar. The registrar must report a death where:",
"title": "Where an inquest is needed"
},
{
"paragraph_id": 2,
"text": "The coroner must hold an inquest where the death is:",
"title": "Where an inquest is needed"
},
{
"paragraph_id": 3,
"text": "Where the cause of death is unknown, the coroner may order a post mortem examination in order to determine whether the death was violent. If the death is found to be non-violent, an inquest is unnecessary.",
"title": "Where an inquest is needed"
},
{
"paragraph_id": 4,
"text": "In 2004 in England and Wales, there were 514,000 deaths of which 225,500 were referred to the coroner. Of those, 115,800 resulted in post-mortem examinations and there were 28,300 inquests, 570 with a jury. In 2014 the Royal College of Pathologists claimed that up to 10,000 deaths a year recorded as being from natural causes should have been investigated by inquests. They were particularly concerned about people whose death occurred as a result of medical errors. \"We believe a medical examiner would have been alerted to what was going on in Mid-Staffordshire long before this long list of avoidable deaths reached the total it did,\" said Archie Prentice, the pathologists' president.",
"title": "Where an inquest is needed"
},
{
"paragraph_id": 5,
"text": "A coroner must summon a jury for an inquest if the death was not a result of natural causes and occurred when the deceased was in state custody (for example in prison, police custody, or whilst detained under the Mental Health Act 1983); or if it was the result of an act or omission of a police officer; or if it was a result of a notifiable accident, poisoning or disease. The senior coroner can also call a jury at his or her own discretion. This discretion has been heavily litigated in light of the Human Rights Act 1998, which means that juries are required now in a broader range of situations than expressly required by statute.",
"title": "Juries"
},
{
"paragraph_id": 6,
"text": "The purpose of the inquest is to answer four questions:",
"title": "Scope of inquest"
},
{
"paragraph_id": 7,
"text": "Evidence must be solely for the purpose of answering these questions and no other evidence is admitted. It is not for the inquest to ascertain \"how the deceased died\" or \"in what broad circumstances\", but \"how the deceased came by his death\", a more limited question. Moreover, it is not the purpose of the inquest to determine, or appear to determine, criminal or civil liability, to apportion guilt or attribute blame. For example, where a prisoner hanged himself in a cell, he came by his death by hanging and it was not the role of the inquest to enquire into the broader circumstances such as the alleged neglect of the prison authorities that might have contributed to his state of mind or given him the opportunity. However, the inquest should set out as many of the facts as the public interest requires.",
"title": "Scope of inquest"
},
{
"paragraph_id": 8,
"text": "Under the terms of article 2 of the European Convention of Human Rights, governments are required to \"establish a framework of laws, precautions, procedures and means of enforcement which will, to the greatest extent reasonably practicable, protect life\". The European Court of Human Rights has interpreted this as mandating independent official investigation of any death where public servants may be implicated. Since the Human Rights Act 1998 came into force, in those cases alone, the inquest is now to consider the broader question \"by what means and in what circumstances\".",
"title": "Scope of inquest"
},
{
"paragraph_id": 9,
"text": "In disasters, such as the 1987 King's Cross fire, a single inquest may be held into several deaths.",
"title": "Scope of inquest"
},
{
"paragraph_id": 10,
"text": "Inquests are governed by the Coroners Rules. The coroner gives notice to near relatives, those entitled to examine witnesses and those whose conduct is likely to be scrutinised. Inquests are held in public except where there are real issues and substantial of national security but only the portions which relate to national security will be held behind closed doors.",
"title": "Procedure"
},
{
"paragraph_id": 11,
"text": "Individuals with an interest in the proceedings, such as relatives of the deceased, individuals appearing as witnesses, and organisations or individuals who may face some responsibility in the death of the individual, may be represented by a legal professional be that a solicitor or barrister at the discretion of the coroner. Witnesses may be compelled to testify subject to the privilege against self-incrimination.",
"title": "Procedure"
},
{
"paragraph_id": 12,
"text": "If there are matters of national security or matters which relate to sensitive matters then under Schedule 1 of the Coroners and Justice Act 2009 an inquest may be suspended and replaced by a public inquiry under s.2 of the Inquiries Act 2005. This can only be ordered by the Home Secretary and must be announced to Parliament with the coroner in charge being informed and the next of kin being informed. The next of kin and coroner can appeal the decision of the Home Secretary.",
"title": "Procedure"
},
{
"paragraph_id": 13,
"text": "The following conclusions (formerly called verdicts) are not mandatory but are strongly recommended:",
"title": "Verdict or conclusions"
},
{
"paragraph_id": 14,
"text": "In 2004, 37% of inquests recorded an outcome of death by accident / misadventure, 21% by natural causes, 13% suicide, 10% open verdicts, and 19% other outcomes.",
"title": "Verdict or conclusions"
},
{
"paragraph_id": 15,
"text": "Since 2004 it has been possible for the coroner to record a narrative verdict, recording the circumstances of a death without apportioning blame or liability. Since 2009, other possible verdicts have included \"alcohol/drug related death\" and \"road traffic collision\". The civil standard of proof, on the balance of probabilities, is used for all conclusions. The standard of proof for suicide and unlawful killing changed in 2018 from beyond all reasonable doubt to the balance of probabilities following a case in the courts of appeal.",
"title": "Verdict or conclusions"
},
{
"paragraph_id": 16,
"text": "Owing in particular to the failures to notice the serial murder committed by Harold Shipman, the Coroners and Justice Act 2009 modernised the system with:",
"title": "Modernisation"
}
]
| Inquests in England and Wales are held into sudden or unexplained deaths and also into the circumstances of and discovery of a certain class of valuable artefacts known as "treasure trove". In England and Wales, inquests are the responsibility of a coroner, who operates under the jurisdiction of the Coroners and Justice Act 2009. In some circumstances where an inquest cannot view or hear all the evidence, it may be suspended and a public inquiry held with the consent of the Home Secretary. | 2001-11-21T21:44:09Z | 2023-09-07T17:00:54Z | [
"Template:ISBN",
"Template:Short description",
"Template:Main",
"Template:Not a typo",
"Template:Reflist",
"Template:Cite web",
"Template:Use dmy dates",
"Template:Cite legislation UK",
"Template:UK SI",
"Template:Small",
"Template:Rp",
"Template:Harvp",
"Template:Cite news",
"Template:UK-LEG",
"Template:Cite book"
]
| https://en.wikipedia.org/wiki/Inquests_in_England_and_Wales |
15,270 | Index | Index (pl.: indexes or indices) may refer to: | [
{
"paragraph_id": 0,
"text": "Index (pl.: indexes or indices) may refer to:",
"title": ""
}
]
| Index may refer to: | 2001-11-22T06:36:33Z | 2023-12-11T19:01:27Z | [
"Template:Self reference",
"Template:Wiktionary",
"Template:Plural form",
"Template:Tocright",
"Template:Look from",
"Template:In title",
"Template:Disambiguation"
]
| https://en.wikipedia.org/wiki/Index |